This paper presents a comparative evaluation of Word Grammar (WG), the Minimalist Programme (MP), and the Matrix Language Frame model (MLF) regarding their predictions of possible combinations in a corpus of German–English mixed determiner–noun constructions. WG achieves the highest accuracy score. The comparison furthermore reveals a difference in accuracy of the predictions between the three models and a significant difference between WG and the MP. The analysis suggests that these differences depend on the assumptions made by the models and the mechanisms they employ. The difference in accuracy between the models, for example, can be attributed to the MLF being concerned with agreement in language membership between the verb and the subject DP/NP of the clause. The significant difference between WG and the MP can be attributed to the distinct roles features play in the two syntactic theories and to how agreement is handled. Based on the results, we draw up a list of characteristics of feature accounts that are empirically most adequate for the mixed determiner–noun constructions investigated and conclude that the syntactic theory that incorporates most of them is WG (Hudson 2007, 2010).
This is a revision of John Trimmer’s English translation of Schrödinger’s famous ‘cat paper’, originally published in three parts in Naturwissenschaften in 1935.
Exploring Consequences, the fourth Decision-Maker Move, is about understanding what will happen as the result of selecting any of the various options. Or rather, what is most likely to happen. Keep in mind that both uncertainty (about what occurs) and luck (either good or bad) often play a part in how things actually turn out. Study, and you will probably do well on the exam, knowing that spending the evening studying won’t be much fun. Don’t study, and you will probably do poorly on the exam but have a good time for a few hours the night before. It’s all about the consequences. Eat the fruit salad or soup special for lunch, and you feel great in the afternoon and get a lot done. Choose the huge burger and a double order of fries, and you feel sleepy and get less done the rest of the day. Consequences follow from the choices we make.
This chapter first briefly summarizes the key findings of the previous chapters by organizing the similarities and differences in the reading of Chinese versus English in several tables. The chapter then closes by discussing several unresolved questions about the reading of Chinese, as well as a few predictions about how these questions will influence future research on Chinese reading and reading science more generally.
The chapter summarizes the ideas put forward in this book. It details how justice under the WTO Agreement is transformative, as opposed to either purely distributive or corrective. At the same time, that justice must be understood on its own terms and is not for that reason entirely unjust. The chapter also examines the possibility of a communitarian theory serving as a general theory of law. Such a theory explains a considerable amount in a way that is naturally coherent and fruitful and offers several predictions and prescriptions about the future of WTO law. At the same time, the chapter acknowledges that a communitarian theory is itself incomplete. This is due in part to abduction, which stresses the tentative, open-ended nature of current knowledge, and in part to presentism, which suggests there is a danger in thinking about the obligations and rights of countries only in the current moment and not in the broader sense of obligations owed to future generations and, beyond that, the environment we live in.
Climate change is resulting in global changes to sea level and wave climates, which in many locations significantly increase the probability of erosion, flooding and damage to coastal infrastructure and ecosystems. There is therefore a pressing societal need to forecast the morphological evolution of our coastlines over a broad range of timescales, spanning days to decades, facilitating more focused, appropriate and cost-effective management interventions and data-informed planning to support the development of coastal environments. A wide range of modelling approaches has been used, with varying degrees of success, to assess detailed morphological evolution and/or simplified indicators of coastal erosion/accretion. This paper presents an overview of these modelling approaches, covering the full range of the complexity spectrum and summarising the advantages and disadvantages of each method. A focus is given to reduced-complexity modelling approaches, including models based on equilibrium concepts, which have emerged as a particularly promising methodology for the prediction of coastal change over multi-decadal timescales. The advantages of stable, computationally efficient, reduced-complexity models must be balanced against the requirement for good generality and skill in diverse and complex coastal settings. Significant obstacles are also identified that limit the generic application of models at regional and global scales. Challenges include the accurate long-term prediction of model forcing time-series in a changing climate, and accounting for processes that can largely be ignored in the shorter term but increase in importance in the long term. Further complications include coastal complexities, such as headland bypassing, whose impacts are difficult to assess accurately, as well as complex structures and geology, mixed grain sizes, limited sediment supply, and sediment sources and sinks. It is concluded that, with present computational resources, data availability limitations and process knowledge gaps, reduced-complexity modelling approaches currently offer the most promising solution for modelling shoreline evolution on daily-to-decadal timescales.
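To make the equilibrium concepts mentioned above more concrete, the sketch below shows a generic equilibrium-type reduced-complexity model in which the shoreline position relaxes toward a wave-driven equilibrium. The synthetic forcing series, coefficients and time step are illustrative assumptions, not details of any specific published model.

```python
# Minimal sketch of a generic equilibrium-type shoreline model: the shoreline
# relaxes toward an equilibrium position set by the incident wave energy.
# All parameters and the synthetic forcing are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_days = 3650  # ~10 years of daily forcing
wave_energy = (1.0
               + 0.5 * np.sin(2 * np.pi * np.arange(n_days) / 365)  # seasonal cycle
               + 0.2 * rng.standard_normal(n_days))                  # storm-like noise
wave_energy = np.clip(wave_energy, 0.05, None)

k = 0.02           # response rate (1/day), illustrative
a, b = -5.0, 10.0  # illustrative linear mapping from wave energy to equilibrium position

shoreline = np.zeros(n_days)
for t in range(1, n_days):
    s_eq = a * wave_energy[t] + b                                    # today's equilibrium position
    shoreline[t] = shoreline[t - 1] + k * (s_eq - shoreline[t - 1])  # relax toward it

print("final shoreline position (m, relative):", round(shoreline[-1], 2))
```

Models of this relaxation form are attractive for multi-decadal prediction precisely because they are stable and cheap to run, which is the trade-off against generality discussed in the abstract.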
Fiedler et al. (2009) reviewed evidence for the utilization of a contingency inference strategy termed pseudocontingencies (PCs). In PCs, the more frequent levels (and, by implication, the less frequent levels) are assumed to be associated. PCs have been obtained using a wide range of task settings and dependent measures. Yet the readiness with which decision makers rely on PCs is poorly understood. A computer simulation explored two potential sources of subjective validity of PCs. First, PCs are shown to perform above chance level when the task is to infer the sign of moderate to strong population contingencies from a sample of observations. Second, contingency inferences based on PCs and inferences based on cell frequencies are shown to partially agree across samples. Intriguingly, both this criterion validity and this convergent validity are by-products of random sampling error, highlighting the inductive nature of contingency inferences.
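The simulation logic described above can be illustrated with a small sketch: draw samples from a 2x2 population with a known contingency, then compare a pseudocontingency inference from the marginal frequencies with an inference from the cell frequencies. The population cell probabilities and sample sizes below are illustrative assumptions, not the parameters of the original simulation.

```python
# Illustrative simulation in the spirit of the abstract: PC inference from
# marginals vs. cell-frequency inference, across random samples.
import numpy as np

rng = np.random.default_rng(0)

def sample_2x2(p_cells, n):
    """Draw n observations from a 2x2 population given cell probabilities."""
    counts = rng.multinomial(n, p_cells)  # order: (A1B1, A1B2, A2B1, A2B2)
    return counts.reshape(2, 2)

def pc_sign(table):
    """PC strategy: assume the more frequent level of A goes with the more frequent level of B."""
    a_margin = table.sum(axis=1)  # frequencies of A1, A2
    b_margin = table.sum(axis=0)  # frequencies of B1, B2
    return 1 if (a_margin[0] >= a_margin[1]) == (b_margin[0] >= b_margin[1]) else -1

def cell_sign(table):
    """Sign of the contingency computed from the cell frequencies."""
    return 1 if table[0, 0] * table[1, 1] >= table[0, 1] * table[1, 0] else -1

# Moderately strong positive population contingency with skewed marginals (assumed values)
p_cells = np.array([0.45, 0.15, 0.10, 0.30])
n_samples, n_obs = 10_000, 20
hits_pc, agree = 0, 0
for _ in range(n_samples):
    t = sample_2x2(p_cells, n_obs)
    hits_pc += pc_sign(t) == 1            # the population sign is positive here
    agree += pc_sign(t) == cell_sign(t)

print("PC correct about the population sign:", hits_pc / n_samples)
print("PC agrees with cell-based inference:", agree / n_samples)
```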
We study whether experts and novices differ in the way they make predictions about National Football League games. In particular, we measure to what extent their predictions are consistent with five environmental regularities that could support decision making based on heuristics. These regularities involve the home team winning more often, the team with the better win-loss record winning more often, the team favored by the majority of media experts winning more often, and two others related to surprise wins and losses in the teams’ previous game. Using signal detection theory and hierarchical Bayesian analysis, we show that expert predictions for the 2017 National Football League (NFL) season generally follow these regularities in a near optimal way, but novice predictions do not. These results support the idea that using heuristics adapted to the decision environment can support accurate predictions and be an indicator of expertise.
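As a rough illustration of the regularity checks described above, the sketch below computes how often each predictor group's picks agree with three of the regularities (home team, better record, media favourite). The data layout and column names are assumptions for illustration; the paper's actual analysis uses signal detection theory and hierarchical Bayesian estimation rather than raw agreement rates.

```python
# Hedged sketch: agreement of picks with simple environmental regularities,
# split by predictor type. Column names and file are hypothetical.
import pandas as pd

games = pd.read_csv("nfl_2017_predictions.csv")
# assumed columns: predictor_type ('expert'/'novice'), pick,
#                  home_team, better_record_team, media_favorite_team

def agreement(df, regularity_col):
    """Proportion of picks that follow a given regularity."""
    return (df["pick"] == df[regularity_col]).mean()

for group, df in games.groupby("predictor_type"):
    print(group,
          "home:", round(agreement(df, "home_team"), 3),
          "record:", round(agreement(df, "better_record_team"), 3),
          "media:", round(agreement(df, "media_favorite_team"), 3))
```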
People often make predictions about the future based on trends they have observed in the past. Revised probabilistic forecasts can be perceived by the public as indicative of such a trend. In five studies, we describe experts who make probabilistic forecasts of various natural events (effects of climate change, landslide and earthquake risks) at two points in time. Prognoses that had been upgraded or downgraded from T1 to T2 were in all studies expected to be updated further, in the same direction, later on (at T3). Thus, two prognoses were in these studies enough to define a trend, forming the basis for future projections. This “trend effect” implies that non-experts interpret a recent forecast in light of what the expert said in the past, and think, for instance, that a “moderate” landslide risk will cause more worry if it has previously been low than if it has been high. By transcending the experts’ most recent forecasts, the receivers are far from conservative, and appear to know more about the experts’ next prognoses than the experts themselves.
This paper explored how frames influence people’s evaluation of others’ probabilistic predictions in light of the outcomes of binary events. Most probabilistic predictions (e.g., “there is a 75% chance that Denver will win the Super Bowl”) can be partitioned into two components: A qualitative component that describes the predicted outcome (“Denver will win the Super Bowl”), and a quantitative component that represents the chance of the outcome occurring (“75% chance”). Various logically equivalent variations of a single prediction can be created through different combinations of these components and their logical or numerical complements (e.g., “25% chance that Denver will lose the Super Bowl”, “75% chance that Seattle will lose the Super Bowl”). Based on the outcome of the predicted event, these logically equivalent predictions can be categorized into two classes: Congruently framed predictions, in which the qualitative component matches the outcome, and incongruently framed predictions, in which it does not. Although the two classes of predictions are logically equivalent, we hypothesize that people would judge congruently framed predictions to be more accurate. The paper tested this hypothesis in seven experiments and found supporting evidence across a number of domains and experimental manipulations, and even when the congruently framed prediction was logically inferior. It also found that this effect held even for subjects who saw both congruently framed and incongruently framed versions of a prediction and judged the two to be logically equivalent.
The present research examines the prevalence of predictions in daily life. Specifically, we examine whether spending predictions for specific purchases occur spontaneously in life outside of a laboratory setting. Across community and student samples, and across both overall self-reports and diary reports, three studies suggest that people make spending predictions for about two-thirds of purchases in everyday life. In addition, we examine factors that increase the likelihood of spending predictions: the size of the purchase, payment form, time pressure, personality variables, and purchase decisions. Spending predictions were more likely for larger, more exceptional purchases and for predictions about items and projects rather than time periods.
This 17-year prospective study applied a social-development lens to the challenge of identifying long-term predictors of adult depressive symptoms. A diverse community sample of 171 individuals was repeatedly assessed from age 13 to age 30 using self-, parent-, and peer-report methods. As hypothesized, competence in establishing close friendships beginning in adolescence had a substantial long-term predictive relation to adult depressive symptoms at ages 27–30, even after accounting for prior depressive, anxiety, and externalizing symptoms. Intervening relationship difficulties at ages 23–26 were identified as part of pathways to depressive symptoms in the late twenties. Somewhat distinct paths by gender were also identified, but in all cases they were consistent with an overall role of relationship difficulties in predicting long-term depressive symptoms. Implications both for early identification of risk and for potential preventive interventions are discussed.
The global human population has increased hugely since the mid-nineteenth century and stands at almost 8 billion at the time of writing. This trend is mirrored in Britain, especially in England, with a total UK population of almost 68 million in 2020. Predictions imply that global increases will slow down, perhaps peaking at around 10 billion by 2100. Three factors contribute to changes in population size. In Britain, the reproductive rate has been below the replacement level of around 2.1 children per couple for several decades. The ongoing increase in human numbers has therefore been dictated primarily by the other two factors. Longevity has increased steadily; people are living longer. However, the most significant driver by far in recent decades has been the high level of net immigration into Britain. Wildlife declines are statistically related to human population density across Western Europe, at least with respect to two well-studied taxonomic groups: amphibians and birds.
Banana is one of the main fruit crops in the world, as it is a rich source of nutrients and has recently become popular for its fibre, particularly as a raw material in many industries. Mathematical models are crucial for strategic and forecasting applications; however, models related to the banana crop are less common, and reviews of previous modelling efforts are scarce, emphasizing the need for evidence-based studies on this topic. Therefore, we reviewed 75 full-text articles published between 1985 and 2021 for information on mathematical models related to banana growth and fruit and fibre yield. We analysed the results to provide a descriptive synthesis of the selected studies. According to the co-occurrence analysis, most studies were conducted on the mathematical modelling of banana fruit production. Modellers often used multiple linear regression models to estimate banana plant growth and fruit yield. Existing models incorporate a range of predictor variables, growth conditions, varieties, modelling approaches and evaluation methods, which limits comparative evaluation and selection of the best model. However, the banana process-based simulation model ‘SIMBA’ and artificial neural networks have proven their robust applicability for estimating banana plant growth. This review shows that there is insufficient information on mathematical models related to banana fibre yield. This review could aid stakeholders in identifying the strengths and limitations of existing models, as well as providing insight into how to build novel and reliable banana crop-related mathematical models.
This contribution marks a dual milestone at the intersection of public health law and JLME: my 50th publication of a substantive manuscript, appearing in the Journal’s 50th anniversary year of 2022. In recognition of these coinciding landmarks, this installment of the Public Health Law column for JLME features observations and reflections on the field based largely on prior publications.
This study is motivated by the COVID-19 pandemic, a major threat to the whole world from the day it first emerged in the Chinese city of Wuhan. Predictions of the number of cases of COVID-19 are crucial in order to prevent and control the outbreak. In this research study, an artificial neural network technique based on rectified linear units is implemented to predict the number of deaths, recovered and confirmed cases of COVID-19 in Pakistan, using 137 days of previous COVID-19 case data, from 25 February 2020, when the first two cases were confirmed, until 10 July 2020. The collected data were divided into training and test sets, which were used to assess the efficiency of the proposed technique. Furthermore, the proposed technique was used to make predictions for the next 7 days after training the model on the whole available data.
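The sketch below shows one way such a ReLU-based neural network forecast could be set up: lagged daily counts as inputs, a train/test split, and a rolling 7-day forecast after refitting on all data. The window length, network size and file name are assumptions for illustration, not details taken from the study.

```python
# Minimal sketch of a ReLU-based neural network forecast of daily case counts.
# File name, window length and network size are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(series, window=7):
    """Turn a 1-D case-count series into (lagged inputs, next-day target) pairs."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X), np.array(y)

# daily_confirmed would hold the 137 daily confirmed-case counts (25 Feb - 10 Jul 2020)
daily_confirmed = np.loadtxt("pakistan_confirmed.csv")  # hypothetical file
X, y = make_windows(daily_confirmed, window=7)

# Simple train/test split: last 20% of windows held out for evaluation
split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(32, 16), activation="relu",
                     max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("test R^2:", model.score(X[split:], y[split:]))

# Refit on all data, then roll the window forward to forecast the next 7 days
model.fit(X, y)
window = list(daily_confirmed[-7:])
forecast = []
for _ in range(7):
    nxt = model.predict(np.array(window[-7:]).reshape(1, -1))[0]
    forecast.append(nxt)
    window.append(nxt)
print("7-day forecast:", forecast)
```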
Introduction: Wait time predictions have become more common in emergency departments in Canada. These estimate the wait time a patient faces to see a provider, and they are usually provided in an accessible way, such as through an online interface. One purpose of these trackers is to improve ED system efficiency. Patients can self-triage to alternative care such as their primary care physician, defer care until a later time, or move from oversubscribed to undersubscribed EDs. However, these mechanisms could also be abused. If providers can artificially influence the wait time, this may provide a lever to change patient flows to an ED. I investigate whether there is evidence suggestive of manipulation of online wait time trackers at an ED system in Ontario. Methods: Inputs into the wait time prediction algorithm, such as patient volumes, are taken from the ED EMR. This is the most likely place where staff can manipulate the wait time tracker, by retaining patients in the EMR system even after they are discharged. I examine two sets of data to assess whether the online tracker displays differences in patient volumes from “true” data. The first is scraped data of patient volumes from the wait times website. The second is the accurate patient volumes from administrative data, which include when a physician discharged patients from the ED. I compare values of the true patient volumes to the online values and plot distributions of these differences. I also employ measures of accuracy such as mean square error and root mean square error to quantify how accurate the online data are compared to the true data. I examine these by ED and over time. Results: There are differences between the number of patients that are posted online and those in the administrative data. The distributions of these differences are skewed towards positive values, suggesting that the online data more often overcount rather than undercount patients. Measures of error increase during times when EDs are congested but do not decrease when EDs become less congested; this inaccuracy persists for a period after EDs cease to be busy. Conclusion: ED wait time trackers have the potential to be manipulated. When staff have an incentive to reduce patient volumes, online data become more inaccurate relative to true data. This suggests that wait time trackers may have unintended consequences and that the information they provide may not be entirely accurate.
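A small sketch of the comparison described in the Methods section is shown below: merge scraped online volumes with administrative volumes, inspect the distribution of the differences, and compute MSE/RMSE per ED. The file names and column names are assumptions for illustration, not the study's actual data layout.

```python
# Illustrative comparison of online (scraped) vs. administrative ("true")
# patient volumes, with difference distributions and MSE/RMSE per ED.
import numpy as np
import pandas as pd

online = pd.read_csv("scraped_wait_times.csv")     # hypothetical: ed_id, timestamp, patients_online
admin = pd.read_csv("administrative_volumes.csv")  # hypothetical: ed_id, timestamp, patients_true

merged = online.merge(admin, on=["ed_id", "timestamp"])
merged["diff"] = merged["patients_online"] - merged["patients_true"]

# Positive skew in the difference distribution => online counts tend to overcount
diffs = merged.groupby("ed_id")["diff"]
print(diffs.describe())

# Error measures by ED
mse = diffs.apply(lambda d: np.mean(np.square(d)))
rmse = np.sqrt(mse)
print(pd.DataFrame({"mse": mse, "rmse": rmse}))
```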
In many situations, incentives exist to acquire knowledge and make correct political decisions. We conduct an experiment that contributes to a small but growing literature on incentives and political knowledge, testing the effect of certain and uncertain incentives on knowledge. Our experiment builds on the basic theoretical point that acquiring and using information is costly, and incentives for accurate answers will lead respondents to expend greater effort on the task and be more likely to answer knowledge questions correctly. We test the effect of certain and uncertain incentives and find that both increase effort and accuracy relative to the control condition of no incentives for accuracy. Holding constant the expected benefit of knowledge, we do not observe behavioral differences associated with the probability of earning an incentive for knowledge accuracy. These results suggest that measures of subject performance in knowledge tasks are contingent on the incentives they face. Therefore, to ensure the validity of experimental tasks and the related behavioral measures, we need to ensure a correspondence between the context we are trying to learn about and our experimental design.
A clearly defined research question allows us to formulate hypotheses that propose possible answers to that question. From these hypotheses, we can then derive specific, unambiguous predictions that allow us to test their validity with empirical data. Hypotheses and predictions serve to narrow down the infinite possibilities for data collection and determine the data we need to collect. In this chapter, I cover formulating hypotheses and predictions, then explain how we often use proxies to test predictions and how practical constraints influence our thinking.