This chapter brings together the argument so far, showing how nonignorable nonresponse may manifest itself and how the various models perform across these contexts, including how they may fail. It also highlights the ideal response to potential nonignorable nonresponse, which involves (1) creating randomized instruments, (2) using the randomized instrument to diagnose nonignorable nonresponse, (3) moving to conventional weights if there is no evidence of nonignorable nonresponse, but (4) using selection models explained here when there is evidence of nonignorable nonresponse. Section 11.1 simulates and analyzes data across a range of scenarios using multiple methods. Section 11.2 discusses how to diagnose whether nonresponse is nonignorable. Section 11.3 integrates the approaches with a decision tree based on properties of the data. Section 11.4 discusses how selection models can fail.
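To make step (2) concrete, here is a minimal sketch of the diagnostic logic, assuming a binary randomized instrument (for example, a randomized incentive) that shifts response rates. All names are illustrative, not from the chapter; the idea is that if nonresponse is ignorable, the instrument should not predict the outcome among respondents.

```python
# Hedged sketch of the diagnose-then-choose workflow; names are illustrative.
import numpy as np
from scipy import stats

def diagnose_nonignorable(outcome, responded, instrument, alpha=0.05):
    """Test whether a randomized response instrument (0/1) predicts the
    outcome among respondents. Under ignorable nonresponse, randomizing
    who responds should not shift the respondents' outcome distribution
    (stylized, unadjusted comparison for illustration)."""
    responded = responded.astype(bool)
    y, z = outcome[responded], instrument[responded]
    _, p = stats.ttest_ind(y[z == 1], y[z == 0], equal_var=False)
    return p < alpha  # True -> evidence of nonignorable nonresponse

def choose_adjustment(outcome, responded, instrument):
    if diagnose_nonignorable(outcome, responded, instrument):
        return "fit a selection model"     # step (4)
    return "use conventional weights"      # step (3)
```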
Nancy Cartwright's 1983 book How the Laws of Physics Lie argued that theories of physics often make use of idealisations, and that as a result many of these theories were not true. The present paper looks at idealisation in logic and argues that, at least sometimes, the laws of logic fail to be true. That might be taken as a kind of skepticism, but I argue rather that idealisation is a legitimate tool in logic, just as in physics, and recognising this frees logicians up to use false laws where these are helpful.
Increasing emphasis on the use of real-world evidence (RWE) to support clinical policy and regulatory decision-making has led to a proliferation of guidance, advice, and frameworks from regulatory agencies, academia, professional societies, and industry. A broad spectrum of studies use real-world data (RWD) to produce RWE, ranging from randomized trials with outcomes assessed using RWD to fully observational studies. Yet, many proposals for generating RWE lack sufficient detail, and many analyses of RWD suffer from implausible assumptions, other methodological flaws, or inappropriate interpretations. The Causal Roadmap is an explicit, itemized, iterative process that guides investigators to prespecify study design and analysis plans; it addresses a wide range of guidance within a single framework. By supporting the transparent evaluation of causal assumptions and facilitating objective comparisons of design and analysis choices based on prespecified criteria, the Roadmap can help investigators to evaluate the quality of evidence that a given study is likely to produce, specify a study to generate high-quality RWE, and communicate effectively with regulatory agencies and other stakeholders. This paper aims to disseminate and extend the Causal Roadmap framework for use by clinical and translational researchers; three companion papers demonstrate applications of the Causal Roadmap for specific use cases.
This chapter details the practical, theoretical, and philosophical aspects of experimental science. It discusses how one chooses a project, performs experiments, interprets the resulting data, makes inferences, and develops and tests theories. It then asks the question, "are our theories accurate representations of the natural world, that is, do they reflect reality?" Surprisingly, this is not an easy question to answer. Scientists assume so, but are they warranted in this assumption? Realists say "yes," but anti-realists argue that realism is simply a mental representation of the world as we perceive it, that is, metaphysical in nature. Regardless of one's sense of reality, the fact remains that science has been and continues to be of tremendous practical value. It would have to be a miracle if our knowledge and manipulation of nature were not real. Even if they are, how do we know they are true in an absolute sense, not just relative to our own experience? This is a thorny philosophical question, the answer to which depends on the context in which it is asked. The take-home message for the practicing scientist is "never assume your results are true."
This innovation in simulation evaluated the effect of a time-efficient, low-cost simulation on prelicensure nursing students' knowledge of and confidence in responding to public health emergencies.
Method:
One hundred eighty-two nursing students, in groups of 5, participated in a 75-min emergency preparedness disaster simulation. A mixed methods design was used to evaluate students’ knowledge and confidence in disaster preparedness, and satisfaction with the simulation.
Results:
Students reported an increase in knowledge and confidence following the disaster simulation and satisfaction with the experience.
Conclusions:
Prelicensure nursing programs can replicate this low-cost, time-efficient disaster simulation to educate students effectively in emergency preparedness.
For international lawyers seeking to promote compliance with international humanitarian law (IHL), some level of affective awareness is essential – but just where one might cultivate an understanding of emotions, and at which juncture of one's career, remains a mystery. This article proposes that what IHL lawyers and advocates of the future need is an affect-based education. More than a simple mastery of a technical set of emotional intelligence skills, what we are interested in here is the refinement of a disposition or sensibility – a way of engaging with the world, with IHL, and with humanitarianism. In this article, we consider the potential for the Jean-Pictet Competition to provide this education. Drawing on our observations of the competition and a survey of 231 former participants, the discussion examines the legal and affective dimensions of the competition, identifies the precise moments of the competition in which emotional processes take place, and probes the role of emotions in role-plays and simulations. Presenting the Jean-Pictet Competition as a form of interaction ritual, we propose that high “emotional energy” promotes a humanitarian sensibility; indeed, participant interactions have the potential to re-constitute the very concept of humanitarianism. We ultimately argue that a more conscious engagement with emotions at competitions like Pictet has the potential to strengthen IHL training, to further IHL compliance and the development of IHL rules, and to enhance legal education more generally.
The Markov True and Error (MARTER) model (Birnbaum & Wan, 2020) has three components: a risky decision-making model with one or more parameters, a Markov model that describes stochastic variation of parameters over time, and a true and error (TE) model that describes probabilistic relations between true preferences and overt responses. In this study, we simulated data according to 57 generating models that either did or did not satisfy the assumptions of the TE fitting model, that either did or did not satisfy the error independence assumptions, that either did or did not satisfy transitivity, and that had various patterns of error rates. A key assumption in the TE fitting model is that a person's true preferences do not change in the short time within a session; that is, preference reversals between two responses by the same person to two presentations of the same choice problem in the same brief session are due to random error. In a set of 48 simulations, data-generating models either satisfied this assumption or implemented a systematic violation in which true preferences could change within sessions. We used the TE fitting model to analyze the simulated data and found that it did a good job of distinguishing transitive from intransitive models and of estimating parameters, not only when the generating model satisfied the model assumptions but also when they were violated in this way. When the generating model violated the assumptions, statistical tests of the TE fitting models correctly detected the violations. Even when the data contained violations of the TE model, the parameter estimates representing probabilities of true preference patterns were surprisingly accurate, except for error rates, which were inflated by model violations. In a second set of simulations, the generating model either had error rates that were or were not independent of true preferences, and transitivity either was or was not satisfied. Again, the TE analysis was able to detect the violations of the fitting model and correctly identified whether the data had been generated by a transitive or an intransitive process; however, in this case, the estimated incidence of a preference pattern was reduced if that pattern had a higher error rate. Overall, the violations could be detected and did not affect the ability of the TE analysis to discriminate between transitive and intransitive processes.
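As a toy illustration of the TE fitting model's key assumption, the sketch below (our own, not the authors' code) generates paired responses in which the true preference is fixed within a session and each response independently flips with error rate e; within-session reversals then occur at rate 2e(1 − e), whatever the true preference probability.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_te(p_true=0.7, e=0.1, n_sessions=100_000):
    """p_true: probability the session's true preference is 'A';
    e: probability that a single response flips the true preference."""
    true_pref = rng.random(n_sessions) < p_true         # fixed within session
    flips = rng.random((n_sessions, 2)) < e             # independent response errors
    responses = true_pref[:, None] ^ flips              # two replicate responses
    return np.mean(responses[:, 0] != responses[:, 1])  # within-session reversals

print(simulate_te())  # close to 2*e*(1-e) = 0.18, regardless of p_true
```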
Chapter 11 begins with a discussion of the constraints of language and how metaphor helps to overcome these constraints and expand the expressive power of language. It briefly summarizes traditional theories of metaphor comprehension, followed by conceptual metaphor theory and perceptual simulations. Extensions of conceptual metaphor theory that imply a code model are discussed and critiqued. Then I introduce and illustrate an approach to metaphor analysis that begins with the speaker's experience as expressed in the text. The chapter discusses grammatical metaphors, metaphorical stories, playful metaphors, and multimodal metaphors. It discusses the processing and comprehension of metaphors, and closes with a discussion of the contribution of metaphor to social structure and personal identity.
There has been an explosion in the use of multimedia across platforms, and the adoption of technology in education has grown further with the global shift to remote instruction. My focus in this chapter is on providing a definition of multimedia learning with simulations. There are many types of simulations, and this chapter presents a framework for understanding this diversity. In particular, I discuss the multimedia principles that inform the design of simulations, along with research evidence on how simulations support learning. Future directions for this research are discussed.
Collective Defined Contribution (CDC) pension schemes are a variant of collective pension plans that are present in many countries and especially common in the Netherlands. CDC schemes are based on the pooled management of the retirement savings of all members, thereby incorporating inter-generational risk-sharing features. Employers are not subject to investment and longevity risks as these are transferred to plan members collectively. In this paper, we discuss policy related to the proposed introduction of CDC schemes to the UK. By means of a simulation-based study, we compare the performance of CDC schemes vis-à-vis typical Defined Contribution schemes under different investment strategies. We find that CDC schemes may provide retirees with a higher income replacement rate on average, together with less uncertainty.
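The flavor of such a comparison can be conveyed with a deliberately stylized Monte Carlo (the parameters and the smoothing rule below are our assumptions, not the paper's design): pooling returns across cohorts leaves average accumulation roughly unchanged while shrinking its dispersion, which is the mechanism behind higher and less uncertain replacement rates.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cohorts, years = 5000, 40
r = rng.normal(0.05, 0.10, size=(n_cohorts, years))  # annual returns per cohort

dc = np.prod(1 + r, axis=1)  # DC: each cohort bears its own investment risk

# Stylized CDC: credit each cohort a blend of its own return and the
# cross-cohort average, mimicking inter-generational risk pooling.
pooled = 0.3 * r + 0.7 * r.mean(axis=0)
cdc = np.prod(1 + pooled, axis=1)

for name, pot in (("DC", dc), ("CDC", cdc)):
    print(f"{name:>3}: mean accumulation {pot.mean():.2f}, sd {pot.std():.2f}")
```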
In Spain, the epidemic curve caused by COVID-19 reached its peak in the last days of March. The lockdown that followed the declaration of the state of alarm on 14 March has prompted discussion of how and when to lift it. In this paper, we aim to inform that discussion by using epidemic simulation techniques based on stochastic individual contact models and several extensions.
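For readers unfamiliar with this model class, here is a minimal stochastic individual-contact-style SIR sketch in discrete time (parameter values are illustrative and are not the paper's calibration for Spain).

```python
import numpy as np

rng = np.random.default_rng(42)

def icm_sir(n=10_000, i0=10, contacts=10, p_trans=0.015, p_recover=0.1, days=180):
    """Discrete-time stochastic SIR with binomial contact draws."""
    s, i, r = n - i0, i0, 0
    history = []
    for _ in range(days):
        # Daily infection risk for a susceptible meeting `contacts` people,
        # each infectious with probability i/n.
        p_inf = 1 - (1 - p_trans * i / n) ** contacts
        new_inf = rng.binomial(s, p_inf)
        new_rec = rng.binomial(i, p_recover)
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

hist = icm_sir()
peak = max(range(len(hist)), key=lambda t: hist[t][1])
print("simulated epidemic peaks on day", peak)
```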
The regression discontinuity design (RDD) is a valuable tool for identifying causal effects with observational data. However, applying the traditional electoral RDD to the study of divided government is challenging. Because assignment to treatment in this case is the result of elections to multiple institutions, there is no obvious single forcing variable. Here, we use simulations in which we apply shocks to real-world election results in order to generate two measures of the likelihood of divided government, both of which can be used for causal analysis. The first captures the electoral distance to divided government and can easily be utilized in conjunction with the standard sharp RDD toolkit. The second is a simulated probability of divided government. This measure does not easily fit into a sharp RDD framework, so we develop a probability restricted design (PRD) which relies upon the underlying logic of an RDD. This design incorporates common regression techniques but limits the sample to those observations for which assignment to treatment approaches “as-if random.” To illustrate both of our approaches, we reevaluate the link between divided government and the size of budget deficits.
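The second, simulated-probability measure can be sketched roughly as follows (the variable names and shock structure are our assumptions, not the authors' exact procedure): perturb observed margins with small random shocks and count how often the re-tabulated outcomes yield divided government.

```python
import numpy as np

rng = np.random.default_rng(7)

def prob_divided(margins_leg, margin_exec, n_sims=5000, shock_sd=0.02):
    """margins_leg: observed two-party margins in each legislative seat;
    margin_exec: observed margin in the executive race. Returns the share
    of simulations that produce divided government."""
    shocks = rng.normal(0, shock_sd, size=(n_sims, len(margins_leg)))
    seats = ((margins_leg + shocks) > 0).sum(axis=1)   # seats won by party A
    leg_a = seats > len(margins_leg) / 2               # chamber control
    exec_a = (margin_exec + rng.normal(0, shock_sd, n_sims)) > 0
    return np.mean(leg_a != exec_a)                    # divided control

# A nearly tied legislature and executive give probabilities near 0.5.
print(prob_divided(np.array([0.01, -0.02, 0.005, -0.01, 0.02]), 0.005))
```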
This chapter briefly surveys the field of intercultural sensitivity training programs and focuses on intercultural simulations as a method to replicate real intercultural interactions, allowing participants to experience important aspects of such interactions and to reflect on their experience in light of the cultural and cross-cultural context. These replications reveal the underlying dynamics and sources of perceptual and interpretive bias that may confound intercultural interaction, such as the fundamental attribution error and ethnocentric and homogeneity biases, which may produce confusion, misattribution, and even conflict. The chapter identifies numerous typologies and situates simulations in an experiential category. It describes two experiential methods in some detail and provides suggestions for extracting insights by elevating the salience of culture in the interpretation of the interactions. Strengths and potential limitations are identified.
Recent work on legislative politics has documented complex patterns of interaction and collaboration through the lens of network analysis. In a largely separate vein of research, the field experiment—with many applications in state legislatures—has emerged as an important approach in establishing causal identification in the study of legislative politics. The stable unit treatment value assumption (SUTVA)—the assumption that a unit's outcome is unaffected by other units' treatment statuses—is required in conventional approaches to causal inference with experiments. When SUTVA is violated via networked social interaction, treatment effects spread to control units through the network structure. We review recently developed methods that can be used to account for interference in the analysis of data from field experiments on state legislatures. The methods we review require the researcher to specify a spillover model, according to which legislators influence each other, and specify the network through which spillover occurs. We discuss these and other specification steps in detail. We find mixed evidence for spillover effects in data from two previously published field experiments. Our replication analyses illustrate how researchers can use recently developed methods to test for interference effects, and support the case for considering interference effects in experiments on state legislatures.
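One common spillover specification of the kind the review describes can be sketched as follows (our illustration, not the authors' exact model): regress each legislator's outcome on their own treatment and the fraction of treated neighbors in the researcher-specified network.

```python
import numpy as np
import statsmodels.api as sm  # assumed available

def spillover_fit(y, d, adj):
    """y: legislator outcomes; d: 0/1 treatment assignments; adj: binary
    adjacency matrix encoding the researcher-specified network."""
    deg = adj.sum(axis=1)
    exposure = adj @ d / np.maximum(deg, 1)  # fraction of treated neighbors
    X = sm.add_constant(np.column_stack([d, exposure]))
    return sm.OLS(y, X).fit()  # coefficient on exposure estimates spillover
```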
Objective:
The present study provides ranges for the magnitude of bias caused by measurement error in stunting rates, a widely used proxy for long-term nutritional status.
Design:
Stunting, which is determined by the number of cases that fall below −2 sd from the mean height-for-age in the population, mechanically increases with higher variance. This variance stems from both natural heterogeneity in the population and measurement error. To isolate the effect of measurement error, we model the true distributions which could give rise to the observed distributions after subtracting a simulated measurement error (a numerical sketch of this mechanism follows the abstract).
Setting:
We analyse information from three rounds of the Demographic and Health Survey (DHS) in Egypt (2005, 2008 and 2014). Egypt ranks high among developing countries with low-quality anthropometric data collected in the DHS, currently the main source of anthropometry in the country.
Participants:
The study relies on re-analysis of existing DHS data, which record height, weight and age data for children under 5 years old.
Results:
Under the most conservative assumptions about measurement error, the stunting rate falls by 4 percentage points for the most recent DHS round, while assuming higher levels of measurement error reduces the stunting rate more dramatically.
Conclusions:
Researchers should be aware of and adjust for data quality concerns in calculating stunting rates for cross-survey comparisons or in communicating to policy makers.
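The mechanical effect described in the Design section can be reproduced in a few lines (the distributional parameters below are ours, chosen purely for illustration): adding measurement-error variance to a fixed true distribution raises the share of observations below −2 sd.

```python
import numpy as np
from scipy.stats import norm

mu, sd_true = -1.0, 1.1          # illustrative 'true' HAZ distribution
for sd_err in (0.0, 0.3, 0.5):   # increasing measurement error
    sd_obs = np.sqrt(sd_true**2 + sd_err**2)
    print(f"error sd {sd_err:.1f}: stunting rate {norm.cdf(-2, mu, sd_obs):.1%}")
```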
In recent years, centrist-liberal parties, such as the German Free Democratic Party (FDP) in 2013 and the British Liberal Democrats in 2015 and 2017, have suffered enormous electoral defeats. These defeats highlight a prominent puzzle in the study of party competition and voting behavior: the empty-center phenomenon. That is, empirical evidence suggests that most parties do not converge to the median voter's position, despite the centripetal force of the voters' preference distribution. Using survey data from Canada, Finland, Germany, and the United Kingdom, this article shows that deterioration of centrist parties' valence image is followed by a collapse of their vote shares. Using mathematical simulations, it shows that centrist parties have limited strategic opportunities to regain their support: unlike other parties, centrist parties cannot alter their policy platforms to compensate for their deteriorated valence image. These results have important implications for political representation and voter–elite linkages.
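The logic of such simulations can be illustrated with a stylized spatial-voting model (the platform positions, valence shift, and voter distribution below are our assumptions): utility is negative squared distance plus a valence term, and once the centrist party's valence falls, no platform shift restores its vote share.

```python
import numpy as np

rng = np.random.default_rng(3)
voters = rng.normal(0, 1, 100_000)   # voter ideal points
left, right = -1.0, 1.0              # rival platforms

def centrist_share(pos, valence):
    u_c = -(voters - pos) ** 2 + valence
    u_l = -(voters - left) ** 2
    u_r = -(voters - right) ** 2
    return np.mean((u_c > u_l) & (u_c > u_r))

print(centrist_share(0.0, 0.0))      # intact valence: share near 0.38
for pos in (0.0, 0.3, 0.6):          # degraded valence: no position recovers it
    print(pos, centrist_share(pos, -0.5))
```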
Repetitive motion planning and control (RMPC) is a significant issue in research on redundant robot manipulators. Moreover, noise from rounding error, truncation error, and robot uncertainty is an important factor that greatly affects RMPC schemes. In this study, the RMPC of redundant robot manipulators in a noisy environment is investigated. By incorporating the proportional and integral information of the desired path, a new RMPC scheme with a pseudoinverse-type (P-type) formulation is proposed. This P-type RMPC scheme suppresses constant noise and bounded time-varying noise. Comparative simulation results based on a five-link robot manipulator and a PUMA560 robot manipulator further validate the effectiveness and superiority of the proposed P-type RMPC scheme over its predecessor.
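To convey the general idea of pseudoinverse-type resolution using proportional and integral information of the desired path, here is a toy three-link planar arm (the arm, gains, and noise model are our assumptions, not the paper's scheme): the integral term accumulates the tracking error, which is what counteracts constant noise.

```python
import numpy as np

rng = np.random.default_rng(5)
L = np.array([1.0, 0.8, 0.6])  # link lengths of a toy 3-link planar arm

def fk(q):
    a = np.cumsum(q)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(q):
    a = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(a[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(a[i:]))
    return J

def track(q0, path, dt=0.01, kp=20.0, ki=100.0, noise_sd=0.01):
    """path: iterable of (desired position, desired velocity) pairs."""
    q, integral = q0.copy(), np.zeros(2)
    for rd, rd_dot in path:
        err = rd - fk(q)
        integral += err * dt                   # integral of the path error
        v = rd_dot + kp * err + ki * integral  # proportional + integral terms
        dq = np.linalg.pinv(jacobian(q)) @ v   # pseudoinverse (P-type) resolution
        q = q + (dq + rng.normal(0, noise_sd, 3)) * dt  # additive joint noise
    return q
```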
Metal matrix composites (MMCs) have great potential to replace monolithic metals in many engineering applications due to their enhanced properties, such as higher strength and stiffness, higher operating temperature, and better wear resistance. Despite their attractive mechanical properties, the application of MMCs has been limited primarily due to their high cost and relatively low fracture toughness and reliability. Microstructure determines material fracture toughness through the activation of different failure mechanisms. In this paper, a 3D multiscale modeling technique is introduced to resolve different failure mechanisms in MMCs. This approach includes 3D microstructure generation, meshing, and cohesive finite element method based failure analysis. The calculations carried out here concern Al/SiC MMCs and focus on primary fracture mechanisms, which are correlated with microstructure characteristics, constituent properties, and deformation behaviors. Simulation results indicate that interface debonding not only creates tortuous crack paths via crack deflection and coalescence of microcracks but also leads to more pronounced plastic deformation, which contributes substantially to the toughening of composite materials. Promoting interface debonding through microstructure design can therefore effectively improve the fracture toughness of MMCs.
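As background for the cohesive finite element method mentioned above, here is a sketch of a bilinear traction-separation law of the kind commonly used in cohesive-zone failure analysis (the parameter values are ours, not the paper's).

```python
import numpy as np

def bilinear_traction(delta, delta0=1e-4, delta_f=1e-3, t_max=400e6):
    """Bilinear cohesive law: traction rises linearly to the cohesive
    strength t_max at opening delta0, then softens linearly to zero at
    delta_f; the area under the curve (0.5 * t_max * delta_f) is the
    fracture energy."""
    delta = np.asarray(delta, dtype=float)
    rising = t_max * delta / delta0
    softening = t_max * (delta_f - delta) / (delta_f - delta0)
    return np.where(delta <= delta0, rising, np.clip(softening, 0.0, None))

print(bilinear_traction([5e-5, 1e-4, 5e-4, 2e-3]))  # tractions (Pa) at sample openings
```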
The configurations associated with the dissociative adsorption of water on a variety of low-coordinated sites of MgO(100) surfaces, including corners, steps, MgO vacancies, and kinks on 〈010〉 steps, have been studied and assigned by combining infrared spectroscopy and ab initio calculations. Three kinds of MgO powders were examined: powders of very high specific surface area prepared by chemical vapor synthesis, and well-defined cubic smoke particles obtained by combustion in either 20:80 or 60:40 O2:Ar mixtures, the latter involving fewer defects and smaller particles. It appears that an imperative requirement for a precise characterization of the reactive behavior of defects is to keep the samples under ultra-high vacuum conditions and to finely control the water partial pressure.