Mediation analysis in social and personality psychology would benefit from integrating practices from statistical mediation analysis, which is already widely used in the field, with practices from causal mediation analysis, which is not. In this chapter, I briefly describe each method on its own, then provide recommendations for how to integrate practices from each so that statistical inference and causal inference can be evaluated simultaneously as part of a single analysis. At the end of the chapter, I describe additional areas of recent development in mediation analysis that social and personality psychologists should also consider adopting in order to improve the quality of inference in their mediation analyses: latent variables and longitudinal models. Ultimately, this chapter is meant to be a gentle introduction to causal inference in the context of mediation, with very practical recommendations for how one can implement these practices in one’s own research.
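For readers unfamiliar with the statistical side, here is a minimal sketch of the product-of-coefficients approach that underlies statistical mediation analysis, with a percentile bootstrap confidence interval for the indirect effect. The simulated data, coefficients, and helper function are illustrative assumptions, not the chapter's own example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a simple mediation structure X -> M -> Y (illustrative data only).
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # a-path: X affects the mediator M
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # b-path plus direct effect c'

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate a*b via two OLS fits."""
    a = np.polyfit(x, m, 1)[0]                       # slope of M on X
    X = np.column_stack([np.ones_like(x), m, x])     # regress Y on M and X
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]      # coefficient on M
    return a * b

# Percentile bootstrap: resample cases, re-estimate the indirect effect.
boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

est = indirect_effect(x, m, y)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {est:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```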
In Chapter 2, I focus on the acquisition of number concepts related to natural numbers. I review nativist views, as well as Dehaene’s early view that number concepts arise from estimations due to the approximate number system. I focus in most detail on the bootstrapping account of Carey and Beck, according to which the object tracking system is the key cognitive resource used in number concept acquisition. However, I endorse a hybrid account that also includes an important role for the approximate number system. I then review some of the criticisms of the bootstrapping account, concluding that, while more empirical data are needed to establish its correctness and details, it currently provides the most plausible account of early number concept acquisition.
Chapter 10 discusses various characteristics of the overall developmental progression of language acquisition. We first discuss some general properties of this process and then show how it can be studied both with respect to language production and language perception. We discuss the stages and milestones that children go through for different aspects of grammar and ask whether the properties and timing of stages lend support to the Innateness Hypothesis for language. We then formulate the argument from stages. Here the idea is that a complex system like language “unfolds” in the human mind, step by step, each step occurring at more or less regular points in time, as determined by a biological clock. This process of unfolding is called maturation. Just as our body gradually changes into a mature system, so does our mind. This process of unfolding is biologically determined and largely outside the control of the organism, although external factors (“nurture”) play a role. We critically evaluate the argument from stages, asking how precisely it might support the Innateness Hypothesis.
Statistical inference infers the properties of an underlying probability distribution from observed data. For hypothesis testing, the t-test and some non-parametric alternatives are covered. Ways to infer confidence intervals and estimate goodness of fit are followed by the F-test (for comparing variances) and the Mann-Kendall trend test. Bootstrap sampling and field significance are also covered.
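As a concrete illustration of two of the covered procedures, here is a brief sketch pairing a two-sample t-test with a bootstrap percentile confidence interval for the difference in means. The synthetic data and parameter choices are assumptions for demonstration only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two synthetic samples with a true mean difference of 0.5 (illustrative).
a = rng.normal(loc=0.0, scale=1.0, size=80)
b = rng.normal(loc=0.5, scale=1.0, size=80)

# Classical two-sample t-test (Welch's variant, no equal-variance assumption).
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Bootstrap percentile confidence interval for the difference in means:
# resample each group with replacement and collect the mean differences.
diffs = np.array([
    rng.choice(b, b.size).mean() - rng.choice(a, a.size).mean()
    for _ in range(5000)
])
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"95% bootstrap CI for mean difference: [{lo:.2f}, {hi:.2f}]")
```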
The model uncertainty issue is pervasive in virtually all applied fields but is especially critical in insurance and finance. To hedge against uncertainty about the underlying probability distribution, which we refer to as ambiguity, the worst case is often considered when quantifying the underlying risk. However, this worst-case treatment often yields results that are overly conservative. We argue that, in most practical situations, a generic risk is realized from multiple scenarios, and the risk in some ordinary scenarios may be subject to negligible ambiguity, so that it is safe to trust the reference distributions. Hence, we only need to consider the worst case in the other scenarios, where ambiguity is significant. We implement this idea in the study of the worst-case moments of a risk in the hope of alleviating the over-conservativeness issue. Note that the ambiguity in our consideration exists in both the scenario indicator and the risk in the corresponding scenario, leading to a two-fold ambiguity issue. We employ the Wasserstein distance to construct an ambiguity ball. Then we disentangle the ambiguity along the scenario indicator and the risk in the corresponding scenario, converting the two-fold optimization problem into two one-fold problems. Our main result is a closed-form worst-case moment estimate. Our numerical studies illustrate that the consideration of partial ambiguity indeed greatly alleviates the over-conservativeness issue.
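The paper's closed-form results are not reproduced here, but a toy computation conveys the simplest version of the Wasserstein worst-case idea: over a 1-Wasserstein ball of radius ε around a reference distribution on the real line, the worst-case mean is the reference mean plus ε, since translating all mass by ε costs exactly ε in transport and Kantorovich-Rubinstein duality rules out doing better. Everything below is a simplified illustration, not the paper's two-fold construction:

```python
import numpy as np

rng = np.random.default_rng(2)

# Empirical reference distribution for the risk (illustrative sample).
losses = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)
eps = 0.25  # radius of the 1-Wasserstein ambiguity ball (assumed)

# Shifting every atom by eps moves the mean by eps at a transport cost of
# exactly eps, so sup E[X] over the ball equals the reference mean + eps.
reference_mean = losses.mean()
print(f"reference mean:  {reference_mean:.3f}")
print(f"worst-case mean: {reference_mean + eps:.3f}")
```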
One characteristic interpretive technique in the discourse of customary international law is the identification of such norms as 'possibly emerging' or possibly in existence. Thus it is frequently asserted that a putative norm 'may' have or 'probably has' customary status. This hypothetical mode of analysis can give rise to the speculative construction of international obligations driven more by preference than by evidence. This speculative rhetorical technique is examined by reference to the account of temporal dimensions of the emergence of customary international law provided in the Chagos Archipelago Advisory Opinion of 2019. Here the International Court of Justice endeavoured to pin down the time of origin and path of evolution of a customary norm requiring territorial integrity in the context of decolonisation as self-determination. This chapter engages with this ubiquitous characteristic of the interpretation of customary international law and argues that the accompanying opacity in relation to international legal norms – norms that are held to generate obligations – is to be deplored.
Accurate zygosity determination is a fundamental step in twin research. Although DNA-based testing is the gold standard for determining zygosity, collecting biological samples is not feasible in all research settings or for all families. Previous work has demonstrated the feasibility of zygosity estimation based on questionnaire (physical similarity) data in older twins, but the extent to which this is also a reliable approach in infancy is less well established. Here, we report the accuracy of different questionnaire-based zygosity determination approaches (traditional and machine learning) in 5.5-month-old twins. The participant cohort comprised 284 infant twin pairs (128 dizygotic and 156 monozygotic) who participated in the Babytwins Study Sweden (BATSS). Manual scoring based on an established technique validated in older twins accurately predicted 90.49% of zygosities, with a sensitivity of 91.65% and a specificity of 89.06%. The machine learning approach improved the prediction accuracy to 93.10%, with a sensitivity of 91.30% and a specificity of 94.29%. Additionally, we quantified the systematic impact of zygosity misclassification on estimates of genetic and environmental influences using simulation-based sensitivity analysis on a separate data set, to show the implications of our machine learning accuracy gain. In conclusion, our study demonstrates the feasibility of determining zygosity in very young infant twins using a four-item questionnaire and builds a scalable machine learning model with better metrics, offering a viable alternative to DNA tests in large-scale infant twin studies.
Much work has shown that differences in the timecourse of language processing are central to comparing native (L1) and non-native (L2) speakers. However, estimating the onset of experimental effects in timecourse data presents several statistical problems, including multiple comparisons and autocorrelation. We compare several approaches to tackling these problems and illustrate them using an L1-L2 visual world eye-tracking dataset. We then present a bootstrapping procedure that allows estimation not only of an effect onset but also of a temporal confidence interval around this divergence point. We describe how divergence points can be used to demonstrate timecourse differences between speaker groups or between experimental manipulations, two important issues in evaluating L2 processing accounts. We discuss possible extensions of the bootstrapping procedure, including determining divergence points for individual speakers and correlating them with individual factors like L2 exposure and proficiency. Data and an analysis tutorial are available at https://osf.io/exbmk/.
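The authors' tutorial is at the OSF link above; the sketch below is only a rough illustration of the general idea, with the onset criterion, thresholds, and synthetic data all being assumptions rather than the authors' implementation. It resamples participants, recomputes a per-timepoint test in each bootstrap sample, and reads off the distribution of onset estimates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic fixation-proportion effects: participants x timepoints, two
# groups, with the L2 effect onsetting ~10 samples later (illustrative).
n_subj, n_time = 30, 100
def make_group(onset):
    data = rng.normal(0.0, 0.1, size=(n_subj, n_time))
    data[:, onset:] += 0.15  # effect appears after the onset
    return data
l1, l2 = make_group(40), make_group(50)

def divergence_point(data, alpha=0.05, run=10):
    """First timepoint where a one-sample t-test vs 0 stays significant
    for `run` consecutive samples (an assumed onset criterion)."""
    p = stats.ttest_1samp(data, 0.0, axis=0).pvalue
    sig = p < alpha
    for t in range(len(sig) - run):
        if sig[t:t + run].all():
            return t
    return np.nan

def bootstrap_onsets(data, n_boot=1000):
    """Resample participants with replacement; re-estimate the onset."""
    onsets = np.array([
        divergence_point(data[rng.integers(0, n_subj, n_subj)])
        for _ in range(n_boot)
    ])
    return np.nanpercentile(onsets, [2.5, 50, 97.5])

print("L1 onset [2.5%, median, 97.5%]:", bootstrap_onsets(l1))
print("L2 onset [2.5%, median, 97.5%]:", bootstrap_onsets(l2))
```

Non-overlapping bootstrap intervals for the two groups would then serve as evidence of a timecourse difference, in the spirit of the divergence-point comparison the abstract describes.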
This chapter examines the problem of epistemic circularity. Can one use a source of belief, F, to support or justify the belief that F is a reliable source of true belief? We examine the views of Alston and Sosa concerning epistemic circularity.
Lastly, as learners we need to cultivate the proper method of thinking about negotiation and learning. A three-step method from medical research that we can adopt is presented here. Paradoxically, humility is crucial for success in this ambitious endeavor.
Radical liberalism is anarchistic. And there are positive reasons to embrace anarchism from a natural law perspective. But the radical liberal must also grapple with criticisms of anarchism, including ones from within the natural law tradition itself. Chapter 9 explains why several related criticisms of anarchism are unsuccessful.
This paper describes how statistical methods can be tested on computer-generated data from known models. We explore bias and percentile tests in detail, illustrating these with examples based on insurance claims and financial time series.
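As an illustration of the general recipe (the model, estimator, and checks below are assumptions for demonstration, not the paper's examples): simulate many datasets from a known model, apply the procedure to each, and compare average estimates and interval coverage against the known truth:

```python
import numpy as np

rng = np.random.default_rng(4)

# Known model: exponential claim sizes with true mean 10 (illustrative).
true_mean = 10.0
n_sims, n_claims = 2000, 25

# Bias test: apply the estimator to many simulated datasets and compare
# the average estimate with the known parameter value.
estimates = np.array([
    rng.exponential(true_mean, n_claims).mean()
    for _ in range(n_sims)
])
print(f"estimated bias of the sample mean: {estimates.mean() - true_mean:.4f}")

# Percentile (coverage) test: in each simulated dataset, build a nominal
# 90% interval for the mean and record whether it covers the truth.
covered = []
for _ in range(n_sims):
    sample = rng.exponential(true_mean, n_claims)
    half = 1.645 * sample.std(ddof=1) / np.sqrt(n_claims)
    covered.append(abs(sample.mean() - true_mean) < half)
print(f"empirical coverage of nominal 90% intervals: {np.mean(covered):.3f}")
```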
A dynamic-stochastic model is developed to evaluate preferences among alternative countercyclical payment programs for representative farms producing corn or soybeans in Iowa and cotton or soybeans in Mississippi. The results show that countercyclical payment programs are not necessarily preferred to fixed payment programs.
The objective of this research is to identify and quantify the motivations for organic grain farming in the United States. Survey data from US organic grain producers were used in regression models to find the statistical determinants of three motivations for organic grain production: profit maximization, environmental stewardship, and an organic lifestyle. The results provide evidence that many organic grain producers had more than a single motivation and that younger farmers are more likely than older farmers to be motivated by environmental and lifestyle goals. Organic grain producers exhibited a diversity of motivations, including profit and stewardship.
By adding reported count data to a classical triangle of reserving data, we derive a surprisingly simple method for forecasting IBNR and RBNS claims. A simple relationship between development factors allows us to incorporate, and then estimate, the reporting and payment delays. Bootstrap methods provide prediction errors and make possible separate inference about IBNR and RBNS claims.
U.S. state politics researchers often analyze data with observations grouped into clusters. This structure commonly produces unmodeled correlation within clusters, leading to downward bias in the standard errors of regression coefficients. Estimating robust cluster standard errors (RCSE) is a common approach to correcting this bias. However, despite their frequent use, recent work indicates that RCSE can also be biased downward. Here the author provides evidence of that bias and offers a potential solution. Through Monte Carlo simulation of an ordinary least squares (OLS) regression model, the author compares conventional standard error (OLS-SE) and RCSE performance to that of a bootstrap method that resamples clusters of observations (BCSE). The author shows that both OLS-SE and RCSE are biased downward, with OLS-SE being the most biased. In contrast, BCSE are not biased and consistently outperform the other two methods. The author concludes with three replications from recent work and offers recommendations to researchers.
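A minimal sketch of the cluster bootstrap idea (the variable names, data-generating process, and bootstrap settings are illustrative assumptions, not the author's simulation code): resample whole clusters with replacement, refit OLS on each resample, and take the standard deviation of the coefficient estimates as the standard error:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic clustered data: 40 clusters with cluster-level random effects,
# so errors are correlated within clusters (illustrative only).
n_clusters, per_cluster = 40, 25
cluster_id = np.repeat(np.arange(n_clusters), per_cluster)
u = rng.normal(0, 1, n_clusters)[cluster_id]     # within-cluster correlation
x = rng.normal(size=cluster_id.size) + 0.5 * u   # regressor tied to cluster
y = 1.0 + 2.0 * x + u + rng.normal(size=x.size)

def ols_slope(x, y):
    """Slope coefficient from an OLS fit with intercept."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Cluster bootstrap: resample clusters (not observations) with replacement.
boot = np.empty(1000)
for i in range(boot.size):
    chosen = rng.integers(0, n_clusters, n_clusters)
    idx = np.concatenate([np.where(cluster_id == c)[0] for c in chosen])
    boot[i] = ols_slope(x[idx], y[idx])

print(f"OLS slope:            {ols_slope(x, y):.3f}")
print(f"cluster-bootstrap SE: {boot.std(ddof=1):.3f}")
```

Resampling at the cluster level preserves the within-cluster dependence in every bootstrap sample, which is why this approach avoids the downward bias of OLS-SE and RCSE that the abstract describes.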
A theory of conceptual development must specify the innate representational primitives, must characterize the ways in which the initial state differs from the adult state, and must characterize the processes through which one is transformed into the other. The Origin of Concepts (henceforth TOOC) defends three theses. With respect to the initial state, the innate stock of primitives is not limited to sensory, perceptual, or sensorimotor representations; rather, there are also innate conceptual representations. With respect to developmental change, conceptual development consists of episodes of qualitative change, resulting in systems of representation that are more powerful than, and sometimes incommensurable with, those from which they are built. With respect to a learning mechanism that achieves conceptual discontinuity, I offer Quinian bootstrapping. TOOC concludes with a discussion of how an understanding of conceptual development constrains a theory of concepts.
This paper considers the bootstrapping approach to measuring reserve uncertainty when applying the model of Schnieper, which separates reserves into Incurred But Not Reported (IBNR) and Incurred But Not Enough Reserved (IBNER) claims. The Schnieper method has been explored in Liu and Verrall (2009), where the Mean Square Errors of Prediction (MSEP) were derived. This paper takes this further by deriving the full predictive distribution using bootstrapping. Numerical examples are provided, and the MSEP from the bootstrapping approach are compared with those obtained analytically.
Monetary policy should be guided by macroeconomic models with limited nominal rigidity; ‘New Classical’ or even for some issues just plain Classical (i.e. with no nominal rigidity at all) models are perfectly adequate for understanding various aspects of the economy that have previously led economists to believe in a high degree of nominal rigidity. On UK data these models account for the facts of inflation persistence and exchange rate ‘overshooting’; their impulse responses are in line with the data; and a typical example, the Liverpool Model, is marginally accepted in its entirety by the data since 1979. Such models suggest that no increased macro instability would result from taking the rigours of monetary policy one stage further from inflation targeting and ensuring that the price level itself is returned to its long-run preset target path — so that the value of money over long periods of time would be utterly predictable.
Asymptotic expansions are obtained for the distribution function of a studentized estimator of the offspring mean sequence in an array branching process with immigration. The expansion result is shown to hold in a test function topology. As an application of this result, it is shown that the bootstrapping distribution of the estimator of the offspring mean in a sub-critical branching process with immigration also admits the same expansion (in probability). From these considerations, it is concluded that the bootstrapping distribution provides a better approximation asymptotically than the normal distribution.
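For orientation, the generic one-term Edgeworth expansion behind such results takes the textbook form below (this is the standard template, not the paper's specific expansion for branching processes with immigration):

```latex
P\left(T_n \le x\right) = \Phi(x) + n^{-1/2}\, p_1(x)\, \varphi(x) + o\left(n^{-1/2}\right)
```

Here \Phi and \varphi are the standard normal distribution function and density, and p_1 is an even polynomial whose coefficients depend on moments such as the skewness. The bootstrap distribution automatically reproduces the n^{-1/2} term, whereas the normal approximation stops at \Phi(x), which is the usual source of the bootstrap's asymptotic advantage.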