In German, it has been shown that the semantic entailments associated with telicity markers are acquired early and that speakers turn to semantic–pragmatic principles to determine whether an overt culmination is cancellable (e.g., van Hout, 1998, 2008; Richter & van Hout, 2013; Schulz & Penner, 2002; Schulz & Ose, 2008). Here, we test the interpretation of three types of telicity markers by Portuguese L2 speakers of German, as well as Portuguese–German bilinguals and German monolinguals. A Bayesian analysis shows that Portuguese L2 speakers of German have difficulty processing telicity with resultative particles but show target-like performance with bounded DPs and adjectival markers. Our analysis also shows that bilingual and monolingual speakers display no substantial differences in their understanding of telicity entailments, albeit with some variability regarding particle markers. We argue that this variation may be due to effects of lexical knowledge and transparency.
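A minimal sketch of the kind of Bayesian group comparison such a study can run, using a conjugate Beta-Binomial model; the acceptance counts and trial numbers below are hypothetical stand-ins, not the study's data.

```python
# Conjugate Beta-Binomial comparison of correct telic interpretations by group.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: correct telic interpretations out of n trials per group.
groups = {"L2 speakers": (61, 90), "bilinguals": (78, 90), "monolinguals": (81, 90)}

posteriors = {}
for name, (k, n) in groups.items():
    # Beta(1, 1) prior + binomial likelihood -> Beta(1 + k, 1 + n - k) posterior.
    posteriors[name] = rng.beta(1 + k, 1 + n - k, size=20_000)

# Posterior probability that L2 speakers perform below monolinguals.
p = np.mean(posteriors["L2 speakers"] < posteriors["monolinguals"])
print(f"P(L2 accuracy < monolingual accuracy | data) = {p:.3f}")
```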
Tools for analysing additive manufacturability often employ complex models that lack transparency; this impedes user understanding and has detrimental effects on the implementation of results. An expert system tool that transparently learns features for successful printing has been created. The tool uses accessible data from STL models and printer configurations to create explainable parameters and identify risks. Testing has shown good agreement with observed print behaviour and easy adaptability. The tool reduces the learning curves designers face in understanding design for additive manufacturing.
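A minimal sketch of one explainable printability feature of the kind such a tool might derive from STL data: flagging steep overhangs from facet normals. The facet normals and the 45-degree threshold are illustrative assumptions, not the tool's actual parameters.

```python
import numpy as np

def overhang_fraction(normals: np.ndarray, max_angle_deg: float = 45.0) -> float:
    """Fraction of facets whose downward tilt exceeds the printable overhang angle."""
    unit = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    # Component of the unit normal along the build direction (+z).
    cos_z = unit[:, 2]
    # For downward-facing facets, the angle between the normal and straight
    # down; small angles mean near-horizontal overhanging surfaces.
    angle_from_down = np.degrees(np.arccos(np.clip(-cos_z, -1.0, 1.0)))
    risky = (cos_z < 0) & (angle_from_down < max_angle_deg)
    return risky.mean()

# Three hypothetical facets: flat top, vertical wall, steep downward overhang.
normals = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.3, 0.0, -0.95]])
print(f"Overhang fraction: {overhang_fraction(normals):.2f}")
```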
The dual language development of dual language immersion (DLI) students, although often examined at the domain level (e.g., listening or reading), remains understudied for more specific skills (e.g., word, sentence, or discourse). This study examines the eleven-month progression of oral language skills in a picture description task in two languages (French and English) for early-elementary (Transitional Kindergarten through first grade) DLI students (N = 42). Using Bayesian methods, which estimate parameters using both the data and prior information, we describe French and English growth patterns as measured by learning progressions whose focus is on language features at the word, sentence, and discourse levels. For French oral language, we found evidence of meaningful positive linear growth for all language features, whereas for English oral language, meaningful linear positive growth was only detected for sophistication of topic vocabulary. Overall, coming from a French-speaking household was associated with steeper French oral language trajectories, but coming from an English-only household did not specifically impact English oral language trajectories. In both languages, grade level influenced the trajectories of some—but not all—features. We conclude with theoretical and practical implications, advocating for a language progression approach in instruction and research on bilingualism.
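A minimal sketch of Bayesian linear-growth estimation for one language feature and one student, with a conjugate normal prior and an assumed known residual SD; the monthly scores and priors are illustrative (the study's full model pools across all 42 students at multiple levels).

```python
import numpy as np

months = np.arange(11.0)                      # 11 months of observation
scores = 2.0 + 0.15 * months + np.random.default_rng(7).normal(0, 0.3, 11)

X = np.column_stack([np.ones_like(months), months])
sigma = 0.3                                   # assumed known residual SD
prior_prec = np.eye(2) / 1.0**2               # N(0, 1^2) priors on intercept, slope

# Conjugate normal posterior: precision-weighted combination of prior and data.
post_prec = prior_prec + X.T @ X / sigma**2
post_cov = np.linalg.inv(post_prec)
post_mean = post_cov @ (X.T @ scores / sigma**2)

slope_sd = np.sqrt(post_cov[1, 1])
print(f"Posterior slope: {post_mean[1]:.3f} +/- {slope_sd:.3f} per month")
```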
Despite their increasing popularity, n-of-1 designs employ data analyses that might not be as complete and powerful as they could be. Borrowing from existing advances in educational and psychological research, this article presents several techniques, with references, for rigorous data analysis in n-of-1 research.
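A minimal sketch of one rigorous single-case analysis of the kind the article points to: a Bayesian bootstrap (Rubin, 1981) of the treatment-minus-baseline difference in one patient's repeated measurements. The phase data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
baseline  = np.array([6.0, 5.5, 6.2, 5.8, 6.1])   # phase A measurements
treatment = np.array([4.9, 5.1, 4.6, 5.0, 4.4])   # phase B measurements

# Bayesian bootstrap: Dirichlet-weighted means in each phase.
draws = []
for _ in range(10_000):
    wa = rng.dirichlet(np.ones(len(baseline)))
    wb = rng.dirichlet(np.ones(len(treatment)))
    draws.append(wb @ treatment - wa @ baseline)
draws = np.array(draws)

print(f"Posterior mean difference: {draws.mean():.2f}")
print(f"P(treatment lowers the outcome) = {(draws < 0).mean():.3f}")
```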
Does household debt affect the size of the fiscal multiplier? We investigate the effects of household debt on government spending multipliers using a smooth transition vector autoregression model. Through generalized impulse response functions, we measure whether the effect of government spending on GDP is conditioned by different levels of household debt in Australia, Sweden, and Norway, three countries with high levels of household indebtedness, and in the world’s seven largest economies. Our results indicate that the short-term effects of government spending tend to be higher if fiscal expansion takes place during periods of low household debt. On average, the fiscal multiplier (on impact) is 0.70, 0.61, and 0.79 (percent of GDP) larger when the increase in government spending takes place during periods of low household debt for Australia, Norway, and the United States.
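A minimal sketch of the logistic transition function at the core of a smooth transition VAR: it converts the state variable (here, standardized household debt) into a weight in [0, 1] that blends the low-debt and high-debt regimes. The smoothness parameter gamma and threshold c are illustrative.

```python
import numpy as np

def transition(z: np.ndarray, gamma: float = 1.5, c: float = 0.0) -> np.ndarray:
    """G(z) = 1 / (1 + exp(-gamma * (z - c))); z is standardized household debt."""
    return 1.0 / (1.0 + np.exp(-gamma * (z - c)))

z = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])       # debt below/at/above its mean
for zi, g in zip(z, transition(z)):
    # Regime-dependent coefficients: beta_t = (1 - G) * beta_low + G * beta_high.
    print(f"debt z = {zi:+.1f} -> weight on high-debt regime G = {g:.2f}")
```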
Understanding historical environmental determinants associated with the risk of elevated marine water contamination could enhance monitoring marine beaches in a Canadian setting, which can also inform predictive marine water quality models and ongoing climate change preparedness efforts. This study aimed to assess the combination of environmental factors that best predicts Escherichia coli (E. coli) concentration at public beaches in Metro Vancouver, British Columbia, by combining the region’s microbial water quality data and publicly available environmental data from 2013 to 2021. We developed a Bayesian log-normal mixed-effects regression model to evaluate predictors of geometric mean E. coli concentrations at 15 beaches in the Metro Vancouver Region. We identified that higher geometric mean E. coli concentrations were predicted by higher previous-sample-day E. coli concentrations, higher rainfall in the preceding 48 h, and higher 24-h average air temperature at median or higher levels of the 24-h mean ultraviolet (UV) index. In contrast, higher mean salinity predicted lower E. coli concentrations. Finally, we determined that the average effects of the predictors varied highly by beach. Our findings could form the basis for building real-time predictive marine water quality models to enable more timely beach management decision-making.
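A minimal sketch of a Bayesian log-normal mixed-effects model of this kind, written with PyMC. The data are simulated stand-ins with a single rainfall predictor; the study's model includes further predictors (temperature, UV, salinity, prior-day concentration) and interactions.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(5)
n, n_beach = 300, 15
beach = rng.integers(0, n_beach, n)
rain48 = rng.gamma(2.0, 1.0, n)                      # rainfall in prior 48 h
log_ecoli = 3.0 + 0.4 * rain48 + rng.normal(0, 0.8, n)

with pm.Model() as model:
    mu_a = pm.Normal("mu_a", 3.0, 2.0)
    sd_a = pm.HalfNormal("sd_a", 1.0)
    a = pm.Normal("a", mu_a, sd_a, shape=n_beach)    # beach-level intercepts
    b_rain = pm.Normal("b_rain", 0.0, 1.0)
    sigma = pm.HalfNormal("sigma", 1.0)
    # Log-normal concentrations: model log(E. coli) with a normal likelihood.
    pm.Normal("obs", mu=a[beach] + b_rain * rain48, sigma=sigma,
              observed=log_ecoli)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=5)
```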
For this book, we assume you’ve had an introductory statistics or experimental design class already! This chapter is a mini refresher of some critical concepts we’ll be using and lets you check you understand them correctly. The topics include predictor and response variables; the common probability distributions biologists encounter in their data; and the common techniques, particularly ordinary least squares (OLS) and maximum likelihood (ML), for fitting models to data and estimating effects and their uncertainty. You should be familiar with confidence intervals and understand what hypothesis tests and P-values do and don’t mean. You should recognize that we use data to decide, but these decisions can be wrong, so you need to understand the risk of missing important effects and the risk of falsely claiming an effect. Decisions about what constitutes an “important” effect are central.
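A minimal sketch of the two fitting techniques named above: for a linear model with normal errors, the closed-form OLS estimate and the numerically maximized likelihood give the same coefficients. The data are simulated for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
x = rng.uniform(0, 10, 50)
y = 1.5 + 0.8 * x + rng.normal(0, 1.0, 50)
X = np.column_stack([np.ones_like(x), x])

# OLS: closed-form least-squares estimate.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# ML: maximize the normal log-likelihood numerically (minimize its negative).
def negloglik(theta):
    b0, b1, log_sd = theta
    resid = y - (b0 + b1 * x)
    sd = np.exp(log_sd)
    return 0.5 * np.sum(resid**2 / sd**2) + len(y) * log_sd

beta_ml = minimize(negloglik, x0=[0.0, 0.0, 0.0]).x
print("OLS:", beta_ols, " ML:", beta_ml[:2])   # intercept and slope agree
```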
Archaeologists have a limited arsenal of methods for dating stone features at alpine sites. Radiocarbon (14C) dating is rarely possible, and available dates often fail to represent the activity of interest (stone feature construction). In this paper I review a legacy set of 89 14C dates for stone driveline sites built by hunter-gatherers in Colorado’s Southern Rocky Mountains. I amend the sample of dates using chronometric hygiene and focus on dates with direct association to hunting features. I then present a newly calibrated set of 29 lichenometric dates for rock features at these sites and use hygiene protocols to remove inaccurate dates. Size-frequency lichenometry, though poorly known in archaeology, provides a way to date stone features indirectly by measuring the growth of long-lived lichens that colonize rock surfaces after construction events. Bayesian modeling of the combined set of dates suggests that the tradition of alpine game driving spans more than 6000 years BP, with abundant use over the last 2000 years. Archaeologists must use multiple methods for dating stone features in alpine environments. This Bayesian analysis is a formal effort to combine lichenometry and 14C dating for archaeological interpretation.
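A minimal sketch of the idea behind lichenometric dating: invert a calibrated lichen growth curve to date a rock surface from thallus diameters. The linear growth model and every number here are illustrative assumptions, not the calibration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)
diameters_mm = np.array([52.0, 48.0, 55.0, 50.0])   # largest thalli on a feature

# Assumed calibration: diameter = growth_rate * age, plus a colonization lag,
# with Monte Carlo uncertainty on the calibrated parameters.
growth_rate = rng.normal(0.035, 0.004, 20_000)      # mm per year
lag_years = rng.normal(20.0, 5.0, 20_000)           # delay before colonization

ages = diameters_mm.max() / growth_rate + lag_years
lo, mid, hi = np.percentile(ages, [2.5, 50, 97.5])
print(f"Feature age: {mid:.0f} years (95% interval {lo:.0f}-{hi:.0f})")
```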
Edited by Alik Ismail-Zadeh, Karlsruhe Institute of Technology, Germany; Fabio Castelli, Università degli Studi, Florence; Dylan Jones, University of Toronto; Sabrina Sanchez, Max Planck Institute for Solar System Research, Germany
Abstract: This chapter provides a broad introduction to Bayesian data assimilation that will be useful to practitioners in interpreting algorithms and results, and for theoretical studies developing novel schemes with an understanding of the rich history of geophysical data assimilation and its current directions. The simple case of data assimilation in a ‘perfect’ model is primarily discussed for pedagogical purposes. Some mathematical results are derived at a high level in order to illustrate key ideas about different estimators. However, the focus of this chapter is on the intuition behind these methods, while more formal and detailed treatments of the data assimilation problem can be found in the references. In surveying a variety of widely used data assimilation schemes, the key message of this chapter is how the Bayesian analysis provides a consistent framework for the estimation problem and how this allows one to formulate its solution in a variety of ways to address the operational challenges in the geosciences.
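A minimal sketch of the Bayesian analysis step in the linear-Gaussian case, i.e., the Kalman filter update that many of the surveyed schemes build on. The forecast, observation operator, and error covariances here are toy values.

```python
import numpy as np

x_f = np.array([1.0, 0.5])              # forecast (prior) state mean
P_f = np.array([[0.5, 0.1],
                [0.1, 0.3]])            # forecast error covariance
H = np.array([[1.0, 0.0]])              # observe only the first state variable
R = np.array([[0.2]])                   # observation error covariance
y = np.array([1.4])                     # the observation

# Kalman gain blends forecast and observation by their relative uncertainties.
S = H @ P_f @ H.T + R
K = P_f @ H.T @ np.linalg.inv(S)

x_a = x_f + K @ (y - H @ x_f)           # analysis (posterior) mean
P_a = (np.eye(2) - K @ H) @ P_f         # analysis (posterior) covariance
print("analysis mean:", x_a)
```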
Deciding whether or not eradication of an invasive species has been successful is one of the main dilemmas facing managers of eradication programmes. When the species is no longer being detected, a decision must be made about when to stop the eradication programme and declare success. In practice, this decision is usually based on ad hoc rules, which may be inefficient. Since surveillance undertaken to confirm species absence is imperfect, any declaration of eradication success must consider the risk and the consequences of being wrong. If surveillance is insufficient, then eradication may be falsely declared (a Type I error), whereas continuation of surveillance when eradication has already occurred wastes resources (a Type II error). We review the various methods that have been developed for quantifying these errors and incorporating them into the decision-making process. We conclude with an overview of future developments likely to improve the practice of determining invasive species eradication success.
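A minimal sketch of the Bayesian calculation underlying such declarations: the posterior probability of eradication after a run of surveys with no detections. The prior and the per-survey detection probability below are illustrative; in practice the stopping threshold is set by weighing the costs of the two error types.

```python
def p_eradicated(prior: float, p_detect: float, n_negative_surveys: int) -> float:
    """Posterior P(eradicated) given n surveys that all failed to detect."""
    # Likelihood of all-negative surveys if the species is still present.
    p_miss_all = (1.0 - p_detect) ** n_negative_surveys
    evidence = prior * 1.0 + (1.0 - prior) * p_miss_all
    return prior / evidence

for n in (0, 3, 6, 12):
    print(f"{n:2d} negative surveys -> P(eradicated) = "
          f"{p_eradicated(prior=0.5, p_detect=0.4, n_negative_surveys=n):.3f}")
```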
True and Error Theory (TET) is a modern latent variable modeling approach for analyzing sets of preferences held by people. Individual True and Error Theory (iTET) allows researchers to estimate the proportion of the time an individual truly holds a particular underlying set of preferences without assuming complete response independence in a repeated measures experimental design. iTET is thus suitable for investigating research questions such as whether an individual ever is truly intransitive in their preferences (i.e., they prefer a to b, b to c, and c to a). While current iTET analysis methods provide the means of investigating such questions, they require large amounts of data to achieve satisfactory power for the hypothesis tests of interest. This paper reviews the performance and shortcomings of the current analysis methods in using data efficiently, and provides new analysis methods that offer substantial gains in power and efficiency.
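A minimal sketch of the TET likelihood for a single choice pair presented twice: a "true" preference (A over B with probability p) plus a response error rate e generate the observed response patterns. The values of p and e are illustrative.

```python
def pattern_probs(p: float, e: float) -> dict:
    """P(observed pattern over two replications), errors independent given the true state."""
    return {
        "AA": p * (1 - e) ** 2 + (1 - p) * e ** 2,
        "AB": p * (1 - e) * e + (1 - p) * e * (1 - e),
        "BA": p * e * (1 - e) + (1 - p) * (1 - e) * e,
        "BB": p * e ** 2 + (1 - p) * (1 - e) ** 2,
    }

probs = pattern_probs(p=0.7, e=0.1)
print(probs, " sum =", round(sum(probs.values()), 10))
# Repeated patterns (AA, BB) are diagnostic: error alone produces reversals
# within a session, so stable reversals across sessions signal true mixtures.
```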
Previous research has demonstrated that Bayesian reasoning performance is improved if uncertainty information is presented as natural frequencies rather than single-event probabilities. A questionnaire study of 342 college students replicated this effect but also found that the performance-boosting benefits of the natural frequency presentation occurred primarily for participants who scored high in numeracy. This finding suggests that even the comprehension and manipulation of natural frequencies require a certain threshold of numeracy abilities, and that the beneficial effects of natural frequency presentation may not be as general as previously believed.
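A minimal sketch contrasting the two formats with the classic medical screening example; the numbers are the standard textbook illustration, not this study's materials.

```python
# Single-event probabilities: P(disease)=0.01, sensitivity=0.8, false-positive rate=0.096.
p_d, sens, fpr = 0.01, 0.8, 0.096
bayes = (sens * p_d) / (sens * p_d + fpr * (1 - p_d))

# Natural frequencies: of 1000 people, 10 have the disease and 8 of them test
# positive; of the 990 without it, about 95 test positive anyway.
positives_with_disease = 8
positives_total = 8 + 95
freq = positives_with_disease / positives_total

print(f"probability format: {bayes:.3f}   natural frequencies: {freq:.3f}")
```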
Bayesian statistics offers a normative description of how a person should update their original beliefs (i.e., their priors) in light of new evidence (i.e., the likelihood). Previous research suggests that people tend to under-weight both their prior (base rate neglect) and the likelihood (conservatism), although this varies by individual and situation. Yet this work generally elicits people’s knowledge as single point estimates (e.g., x has a 5% probability of occurring) rather than as a full distribution. Here we demonstrate the utility of eliciting and fitting full distributions when studying these questions. Across three experiments, we found substantial variation in the extent to which people showed base rate neglect and conservatism, which our method allowed us to measure for the first time simultaneously at the level of the individual. While most people tended to disregard the base rate, they did so less when the prior was made explicit. Although many individuals were conservative, there was no apparent systematic relationship between base rate neglect and conservatism within each individual. We suggest that this method shows great potential for studying human probabilistic reasoning.
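A minimal sketch of one common way to model such data: a grid posterior where exponents down-weight the prior (alpha < 1, base rate neglect) or the likelihood (beta < 1, conservatism). The elicited prior, evidence, and weights are illustrative.

```python
import numpy as np
from scipy import stats

theta = np.linspace(0.001, 0.999, 999)
prior = stats.beta.pdf(theta, 2, 8)            # elicited prior distribution
lik = stats.binom.pmf(7, 10, theta)            # observed evidence: 7 of 10

def weighted_posterior(alpha: float, beta: float) -> np.ndarray:
    post = prior**alpha * lik**beta
    return post / post.sum()                   # normalize on the uniform grid

for a, b in [(1.0, 1.0), (0.2, 1.0), (1.0, 0.2)]:
    mean = (theta * weighted_posterior(a, b)).sum()
    print(f"alpha={a:.1f} (prior wt), beta={b:.1f} (lik wt): posterior mean {mean:.3f}")
```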
There are a variety of approaches to biventricular repair in neonates and infants with adequately sized ventricles and left-sided obstruction in the presence of a ventricular septal defect. Those who undergo repair in a staged manner initially undergo a Norwood procedure, followed by closure of the ventricular septal defect such that the neo-aorta is entirely committed to the left ventricle, and placement of a right ventricle-to-pulmonary artery conduit (Yasui operation). This study aimed to identify clinical and haemodynamic factors upon paediatric cardiac ICU admission immediately after the two-stage Yasui operation that were associated with post-operative length of stay.
Methods:
This was a retrospective review of patients who underwent the Yasui procedure after the initial Norwood operation between 1 January 2011 and 31 December 2020. Patients with complete data on admission were identified and analysed using Bayesian regression analysis.
Results:
A total of 15 patients were included. The median age was 9.0 months and post-operative length of stay was 6 days. Bayesian regression analysis demonstrated that age, weight, heart rate, mean arterial blood pressure, central venous pressure, pulse oximetry, cerebral near-infrared spectroscopy, renal near-infrared spectroscopy, pH, pCO2, ionised calcium, and serum lactate were all associated with post-operative length of stay.
Conclusion:
Discrete clinical and haemodynamic factors upon paediatric cardiac ICU admission after staged Yasui completion are associated with post-operative length of stay. Clinical target ranges can be developed and seem consistent with the notion that greater systemic oxygen delivery is associated with lower post-operative length of stay.
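A minimal sketch of a Bayesian regression of the kind such a review can use, relating one admission variable (serum lactate) to post-operative length of stay with a weakly informative prior, which matters at n = 15. All values are simulated stand-ins, not the study's records.

```python
import numpy as np

rng = np.random.default_rng(2)
lactate = rng.normal(2.0, 0.8, 15)                     # mmol/L at admission
los = 6 + 1.2 * (lactate - 2.0) + rng.normal(0, 1.5, 15)

X = np.column_stack([np.ones(15), lactate - lactate.mean()])
sigma = 1.5                                            # assumed residual SD
prior_prec = np.eye(2) / 5.0**2                        # N(0, 5^2) priors

# Conjugate normal posterior over (intercept, slope).
post_cov = np.linalg.inv(prior_prec + X.T @ X / sigma**2)
post_mean = post_cov @ (X.T @ los / sigma**2)

slope_draws = rng.normal(post_mean[1], np.sqrt(post_cov[1, 1]), 10_000)
print(f"P(higher lactate -> longer stay) = {(slope_draws > 0).mean():.3f}")
```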
The paper investigates the validity of individual perceptions of heart disease risks, and examines how information and risk perceptions affect marginal willingness to pay (MWTP) to reduce risk, using data from a stated preference survey. Results indicate that risk perceptions held before receiving risk information are plausibly related to objective risk factors and reflect individual-specific information not found in aggregate measures of objective risk. After receiving information, individuals’ updates of prior risk assessments are broadly consistent with Bayesian learning. Perceived heart disease risks thus satisfy construct validity and provide a valid basis for inferring MWTP to reduce risk. Consistent estimators of the relationship of MWTP to endogenously perceived risk are developed. Estimating MWTP based on objective rather than subjective risks causes misleading inferences about benefits of risk reduction. An empirical case study shows that estimated benefits may be as much as 60–98% higher when estimated using individuals’ heterogeneous perceptions of risk than when using aggregate estimates of objective risk. The main contributions include assessing the validity of risk perceptions and their updating, consistently estimating the relationship between MWTP and endogenously perceived risk, and demonstrating the importance of employing risk perception information for accurate benefit measurement.
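A minimal sketch of the Bayesian learning benchmark against which such updates are judged: with normal prior beliefs and a normal information signal, the posterior risk belief is a precision-weighted average of the two. The numbers are illustrative.

```python
prior_mean, prior_sd = 0.20, 0.08      # perceived 10-year heart disease risk
signal_mean, signal_sd = 0.10, 0.04    # communicated objective risk estimate

w_prior = 1 / prior_sd**2
w_signal = 1 / signal_sd**2
post_mean = (w_prior * prior_mean + w_signal * signal_mean) / (w_prior + w_signal)
post_sd = (w_prior + w_signal) ** -0.5

print(f"updated risk perception: {post_mean:.3f} (sd {post_sd:.3f})")
# A Bayesian updater lands between prior and signal, closer to the more
# precise source; systematic deviations indicate over- or under-weighting.
```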
Maps play an important role in environmental science applications by allowing practitioners to monitor changes at national and global scales. Over the last decade, it has become increasingly popular to use satellite imagery data and machine learning techniques (MLTs) to construct such maps. Given the black-box nature of many of these MLTs, however, quantifying uncertainty in these maps often relies on sampling reference data under stricter conditions. Practical constraints can make sampling such data expensive, which forces stakeholders to make a trade-off between the degree of uncertainty in predictions and the costs of collecting appropriately sampled reference data. Furthermore, quantifying any trade-off is often difficult, as it will depend on many interdependent factors that cannot be fully understood until more data is collected. This paper investigates how a combination of Bayesian inference and an adaptive approach to sampling reference data can offer a generalizable way of managing such trade-offs. The approach is illustrated and evaluated using a woodland mapping of England as a case study in which reference data is collected under constraints motivated by COVID-19 travel restrictions. The key findings of this paper are as follows: (a) an adaptive approach to sampling reference data allows an informed approach when quantifying this trade-off; and (b) Bayesian inference is naturally suited to adaptive sampling and can make use of Monte Carlo methods when dealing with more advanced problems and analytical techniques.
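A minimal sketch of Bayesian adaptive sampling of reference data: keep a Beta posterior for map accuracy in each stratum and send the next batch of reference samples where the posterior is most uncertain. The strata names and counts are illustrative, not the paper's design.

```python
# (correct, incorrect) reference labels collected so far in three map strata.
counts = {"broadleaf": (40, 10), "conifer": (12, 9), "non-woodland": (5, 1)}

def posterior_var(k_correct: int, k_wrong: int) -> float:
    a, b = 1 + k_correct, 1 + k_wrong          # Beta(1, 1) prior
    return (a * b) / ((a + b) ** 2 * (a + b + 1))

variances = {s: posterior_var(*kk) for s, kk in counts.items()}
next_stratum = max(variances, key=variances.get)
print({s: round(v, 4) for s, v in variances.items()})
print("allocate next reference samples to:", next_stratum)
```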
Patients with anorexia nervosa (AN) show impaired decision-making ability, but it is still unclear if this is a trait marker (i.e., being associated with AN at any stage of the disease) or a state parameter of the disease (i.e., being present only in acutely ill patients), and if it has endophenotypic characteristics. The aim of this study was to determine the endophenotypic, and state- or trait-associated nature of decision-making impairment in AN.
Methods
Ninety-one patients with acute AN (A-AN), 90 unaffected relatives (UR), 23 patients remitted from AN (R-AN), and 204 healthy controls (HC) carried out the Iowa gambling task (IGT). The prospective valence learning (PVL) model was employed to distinguish the cognitive dimensions underlying the decision-making process, that is, learning, consistency, feedback sensitivity, and loss aversion. IGT performance and decision-making dimensions were compared among groups to assess whether they had endophenotypic features (i.e., present in A-AN, UR, and R-AN, but not in HC) and/or trait-associated features (i.e., present in A-AN and R-AN but not in HC).
Results
Patients with A-AN had lower performance on the IGT (p < 0.01), while UR, R-AN, and HC had comparable results. PVL feedback sensitivity was lower in patients with R-AN and A-AN than in HC (p < 0.01).
Conclusions
Alteration of decision-making ability did not show endophenotypic features. Impaired decision-making seems a state-associated characteristic of AN, resulting from the interplay between trait-associated low feedback sensitivity and state-associated features of the disease.
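A minimal sketch of one common PVL specification (PVL-Delta) for decomposing IGT choices: a utility with loss aversion, a delta learning rule, and a softmax whose temperature reflects consistency. The parameter values are illustrative, not the study's estimates.

```python
import numpy as np

alpha, lam = 0.5, 2.0        # feedback sensitivity (utility shape), loss aversion
a_lr, cons = 0.3, 2.0        # learning rate, choice consistency

def utility(x: float) -> float:
    return x**alpha if x >= 0 else -lam * (-x) ** alpha

Ev = np.zeros(4)                          # expectancies for decks A-D
theta = 3.0**cons - 1.0                   # softmax sensitivity

for chosen_deck, net_payoff in [(0, -150.0), (2, 50.0), (2, 50.0)]:
    # Delta rule: move the chosen deck's expectancy toward the felt utility.
    Ev[chosen_deck] += a_lr * (utility(net_payoff) - Ev[chosen_deck])
    choice_prob = np.exp(theta * Ev) / np.exp(theta * Ev).sum()
    print(np.round(choice_prob, 3))
# Low feedback sensitivity (small alpha) flattens utilities, so experienced
# losses barely move expectancies, the pattern reported for A-AN and R-AN.
```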
This paper proposes a Bayesian multilevel spatio-temporal model with a time-varying spatial autoregressive coefficient to estimate temporally heterogeneous network interdependence. To tackle the classic reflection problem, we use multiple factors to control for confounding caused by latent homophily and common exposures. We develop a Markov Chain Monte Carlo algorithm to estimate parameters and adopt Bayesian shrinkage to determine the number of factors. Tests on simulated and empirical data show that the proposed model improves identification of network interdependence and is robust to misspecification. Our method is applicable to various types of networks and provides a simpler and more flexible alternative to coevolution models.
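A minimal sketch of the data-generating process behind a time-varying spatial autoregressive model: outcomes at time t satisfy y_t = rho_t * W y_t + X_t beta + eps_t, solved as y_t = (I - rho_t W)^{-1}(X_t beta + eps_t). The network, coefficient, and rho path are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
n, T = 20, 5
W = rng.random((n, n)) * (rng.random((n, n)) < 0.2)   # sparse weighted network
np.fill_diagonal(W, 0.0)
W /= np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # row-standardize

beta = 1.0
rho_path = np.linspace(0.1, 0.6, T)                   # interdependence rises over time
I = np.eye(n)

for t, rho in enumerate(rho_path):
    x = rng.normal(size=n)
    eps = rng.normal(scale=0.5, size=n)
    y = np.linalg.solve(I - rho * W, beta * x + eps)  # equilibrium outcomes
    print(f"t={t}: rho={rho:.2f}, var(y)={y.var():.2f}")
```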
What is the impact of uncommon but notable violent acts on conflict dynamics? We analyze the impact of the murder of a Palestinian child on the broader dynamics of Israeli-Palestinian violence in Jerusalem. By using novel micro-level event data and utilizing Discrete Fourier Transform and Bayesian Poisson Change Point Analysis, we compare the impact of the murder to that of other lethal but more typical Israeli-Palestinian events. We demonstrate that the murder had a large and durable impact on the average number of daily riots in Jerusalem, whereas the other events caused smaller, short-term effects. We demonstrate that scholars should devote more attention to the analysis of atypical violent acts and indicate a set of tools for conducting such analyses.
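A minimal sketch of a single Bayesian Poisson change-point analysis of the kind applied to daily riot counts (the DFT step is not shown): a grid posterior over the change day, with conjugate Gamma priors on the before/after rates integrated out. The counts are simulated stand-ins for the event data.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(6)
counts = np.concatenate([rng.poisson(1.0, 40), rng.poisson(4.0, 40)])
n = len(counts)
a0, b0 = 1.0, 1.0                         # Gamma(shape, rate) prior on each rate

def log_marginal(y: np.ndarray) -> float:
    """log p(y) for i.i.d. Poisson counts with a Gamma(a0, b0) prior on the rate."""
    a, b = a0 + y.sum(), b0 + len(y)
    return (a0 * np.log(b0) - gammaln(a0) + gammaln(a) - a * np.log(b)
            - gammaln(y + 1).sum())

log_post = np.array([log_marginal(counts[:k]) + log_marginal(counts[k:])
                     for k in range(1, n)])
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("most probable change day:", 1 + np.argmax(post))
```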
The knowledge of genetic parameters of performance traits is crucial for any breeding programme in dairy animals. The present study was conducted to use a Bayesian approach for estimation of genetic parameters of production and reproduction traits in Jersey crossbred cattle. Data on Jersey crossbred cattle maintained at the Eastern Regional Station, National Dairy Research Institute, West Bengal, spanning 41 years, were utilized. The marginal posterior medians of heritability for 305-day milk yield (305MY), total milk yield (TMY), peak yield (PY), lactation length (LL), calving interval (CI), total milk yield per day of lactation length (TMY/LL) and total milk yield per day of calving interval (TMY/CI) were 0.31 ± 0.07, 0.29 ± 0.07, 0.27 ± 0.06, 0.16 ± 0.05, 0.15 ± 0.05, 0.29 ± 0.06 and 0.27 ± 0.06, respectively. Moderate heritability estimates for 305MY, TMY, PY and production efficiency traits indicate the presence of adequate additive genetic variance in these traits to respond to selection combined with better herd management. Repeatability estimates for 305MY, TMY, PY, LL, CI, TMY/LL and TMY/CI were 0.57 ± 0.08, 0.58 ± 0.08, 0.51 ± 0.07, 0.34 ± 0.06, 0.31 ± 0.06, 0.54 ± 0.07 and 0.49 ± 0.07, respectively. Repeatability estimates for 305MY, TMY and PY were high in the current study, suggesting the use of first lactation records for early evaluation of Jersey crossbred cattle for future selection. Genetic correlations varied from 0.21 to 0.97, and the maximum genetic correlation was observed between 305MY and TMY, indicating that consideration of 305MY instead of TMY in breeding programmes would suffice. Positive genetic correlations of CI with 305MY and TMY indicated an antagonistic association between production and reproduction traits.
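A minimal sketch of how heritability and repeatability follow from variance components; the component values below are illustrative, chosen to reproduce the scale of the reported 305MY estimates, not the study's estimated components.

```python
sigma2_additive = 0.31      # additive genetic variance (scaled)
sigma2_perm_env = 0.26      # permanent environmental variance
sigma2_residual = 0.43      # residual variance

sigma2_phenotypic = sigma2_additive + sigma2_perm_env + sigma2_residual

h2 = sigma2_additive / sigma2_phenotypic                       # heritability
r = (sigma2_additive + sigma2_perm_env) / sigma2_phenotypic    # repeatability
print(f"heritability h^2 = {h2:.2f}, repeatability r = {r:.2f}")
```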