This succinct introduction to the fundamental physical principles of turbulence provides a modern perspective through statistical theory, experiments, and high-fidelity numerical simulations. It describes classical concepts of turbulence and offers new computational perspectives on their interpretation based on numerical-simulation databases, introducing students to phenomena at a wide range of scales. Unique, practical, multi-part physics-based exercises use realistic data of canonical turbulent flows developed by the Stanford Center for Turbulence Research to equip students with hands-on experience with practical and predictive analysis tools. Over 20 case studies spanning real-world settings such as wind farms and airplanes, color illustrations, and color-coded pedagogy support student learning. Accompanied by downloadable datasets and solutions for instructors, this is the ideal introduction for students in aerospace, civil, environmental, and mechanical engineering and the physical sciences taking a graduate-level one-semester course on turbulence, advanced fluid mechanics, or turbulence simulation.
In most social psychological studies, researchers conduct analyses that treat participants as a random effect. This means that inferential statistics about the effects of manipulated variables address the question of whether effects can be generalized from the sample of participants included in the research to other participants that might have been used. In many research domains, experiments actually involve multiple random variables (e.g., stimuli or items to which participants respond, experimental accomplices, interacting partners, groups). If analyses in these studies treat participants as the only random factor, then conclusions cannot be generalized to other stimuli, items, accomplices, partners, or groups. Mixed models that allow multiple random factors are required. For studies with single experimental manipulations, we consider alternative designs with multiple random factors, analytic models, and power considerations. Additionally, we discuss how random factors that vary between studies, rather than within them, may induce effect-size heterogeneity, with implications for power and the conduct of replication studies.
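A minimal sketch of what such a model can look like, assuming simulated data rather than any of the studies discussed above: participants and stimuli are both treated as crossed random factors, and the model is fit with statsmodels by declaring each factor as a variance component within a single group (the approach described in the statsmodels MixedLM documentation). All variable names and effect sizes here are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_stimuli = 40, 20

# Every participant responds to every stimulus; the condition varies across stimuli.
d = pd.DataFrame(
    [(p, s) for p in range(n_participants) for s in range(n_stimuli)],
    columns=["participant", "stimulus"],
)
d["condition"] = (d["stimulus"] % 2).astype(float)        # 0/1 manipulation
participant_eff = rng.normal(0, 1.0, n_participants)      # random participant intercepts
stimulus_eff = rng.normal(0, 1.0, n_stimuli)              # random stimulus intercepts
d["y"] = (0.5 * d["condition"]
          + participant_eff[d["participant"]]
          + stimulus_eff[d["stimulus"]]
          + rng.normal(0, 1.0, len(d)))

# Crossed random effects: treat the data as one group and declare each factor
# as a variance component.
d["group"] = 1
vc = {"participant": "0 + C(participant)", "stimulus": "0 + C(stimulus)"}
model = smf.mixedlm("y ~ condition", d, groups="group", vc_formula=vc, re_formula="0")
print(model.fit().summary())
```

The key design point is that both sources of sampling variability enter the model, so the inference about the condition effect generalizes beyond the particular stimuli used, not only beyond the particular participants.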
This chapter conducts a statistical analysis of nuclear latency’s political consequences. Using a design-based approach to causal inference, it determines how the onset of nuclear latency influences several foreign policy outcomes: fatal military disputes, international crises, foreign policy preferences, and US troop deployments.
The Accelerating COVID-19 Therapeutic Interventions and Vaccines (ACTIV) Cross-Trial Statistics Group gathered lessons learned from statisticians responsible for the design and analysis of the 11 ACTIV therapeutic master protocols to inform contemporary trial design as well as preparation for a future pandemic. The ACTIV master protocols were designed to rapidly assess what treatments might save lives, keep people out of the hospital, and help them feel better faster. Study teams initially worked without knowledge of the natural history of disease and thus without key information for design decisions. Moreover, the science of platform trial design was in its infancy. Here, we discuss the statistical design choices made and the adaptations forced by the changing pandemic context. Lessons around critical aspects of trial design are summarized, and recommendations are made for the organization of master protocols in the future.
This chapter introduces the National Security Institutions Data Set, an original cross-national resource offering the first systematic measurement of national security decision-making and coordination bodies across the globe from 1946 to 2015. The chapter leverages these data to probe the theory quantitatively, yielding three findings that are consistent with the theory’s propositions. First, it shows that national security institutions are more malleable than previous scholarship has suggested. Second, it finds that integrated institutions tend to perform better than institutional alternatives. Third, it shows that institutional change is associated with domestic environments in which leaders have political incentives to weaken the bureaucracy.
The impetus for this study was a review of the Society for American Archaeology (SAA) 86th Annual Meeting program in 2021. Finding that not a single poster or presentation referenced looting or antiquities trafficking, despite these being ethical considerations that all SAA members are expected to recognize, we sought to investigate whether this absence was an irregularity – perhaps due to the virtual format of the meeting – or more common than not. For a broader understanding of whether, how, and where these topics are discussed by archaeologists outside the SAA, we expanded the investigation and studied the archives of 14 other archaeological and anthropological conferences. The results show that, despite an overall increase in mentions of looting and antiquities trafficking at conferences, the subject remains a niche and infrequently discussed topic.
In the present study, the thermal-insulation and mechanical performances of cement- and heat-stabilized compressed earthen blocks (CEBs) were compared to determine the factors that influence those properties. The raw clays used consist mainly of kaolinite, orthoclase and quartz. The mechanical strength increased with both the amount of cement added and the firing temperature; however, the response is stronger for cement-stabilized CEBs. The thermal insulation of fired bricks is greater than that of cement-stabilized bricks. This difference was related to the decrease in porosity and the formation of continuous surfaces: in cement-stabilized CEBs the decrease in thermal insulation is mainly due to continuous-surface formation, whereas in the fired CEBs it is due to the modification of pore volume. The mineralogy of the raw clays is statistically correlated with porosity and continuous-surface development, which were confirmed as the main factors controlling both the mechanical strength and the thermal insulation. In cement stabilization, the decrease in insulation is due to the development of continuous surfaces, while for heat stabilization, mineral transformations during sintering reduced continuous-surface formation and the insulation was controlled by both radiation and reduced surface conduction. The influence of the mineralogy of the raw material shows that clay content favours insulation in fired bricks obtained at T ≤ 1000°C, while sand content favours densification. In contrast, clay content reduces the mechanical response of cement-stabilized material because of limited cement–clay interactions. In general, the mechanical response is more favourable in cement stabilization, while thermal insulation is better in fired bricks.
Statistical analyses of chemical data for corrensite minerals reported in the literature suggest a large compositional variability, more evident in octahedral than in tetrahedral coordination. Mg occupies 40–80% of the octahedral sites, with Al and Fe2+ making up the remainder. Approximately 15–30% of the tetrahedral sites are filled by Al. Despite this compositional variability, distinct fields for the several types of mixed-layer trioctahedral chlorite/trioctahedral swelling layer are not apparent. Statistical analyses of the composition of corrensite compared with saponite, vermiculite, and chlorite suggest that corrensite is intermediate between trioctahedral chlorite and trioctahedral smectite. If Fe/(Fe + Mg) > 50%, chlorite alone is favored, but with increasing Mg, chlorite appears to transform into corrensite and then, by iron oxidation, into trioctahedral smectite. Despite the chemical variability among corrensite, chlorite, and saponite, corrensite appears chemically to be a well-defined species. On the other hand, corrensite cannot be characterized chemically on the basis of its swelling component. Thus, the current definition of corrensite as a regular 1:1 interstratification of trioctahedral chlorite and either trioctahedral smectite or vermiculite is appropriate.
The Upper Cretaceous Gammon Shale has served as both source bed and reservoir rock for accumulations of natural gas. Gas-producing and nonproducing zones in the Gammon Shale are differentiated on the basis of geophysical log interpretation. To determine the physical basis of the log responses, mineralogical, cation-exchange, textural, and chemical analyses were conducted on core samples from both producing and nonproducing portions of a Gammon Shale well in southwestern North Dakota. Statistical treatment of the laboratory data (two-sample t-test and discriminant function analysis) indicates that the producing and nonproducing zones differ significantly in mixed-layer clay content (7 vs. 12%), weight proportion of the clay-size (0.5–1.0 μm) fraction (5.3 vs. 6.3%), ratio of Ca2+ to Na+ extracted during ion exchange (1.4 vs. 1.0), and abundance of dolomite (10 vs. 8%). The geophysical logs apparently record subtle differences in composition and texture which probably reflect variations in the original detrital constituents of the Gammon sediments. Successfully combining log interpretation and clay petrology aids in understanding the physical basis of log response in clay-rich rocks and enhances the effectiveness of logs as predictive geologic tools.
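A minimal sketch of the kind of two-sample comparison and discriminant function analysis named above, run on hypothetical core-sample measurements (the group means echo the figures in the abstract, but the data, sample sizes, and variances are made up for illustration):

```python
import numpy as np
from scipy import stats
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Hypothetical mixed-layer clay content (%) for producing vs. nonproducing samples
producing = rng.normal(7.0, 1.5, 25)
nonproducing = rng.normal(12.0, 1.5, 25)
t, p = stats.ttest_ind(producing, nonproducing, equal_var=True)
print(f"two-sample t-test: t = {t:.2f}, p = {p:.3g}")

# Discriminant function analysis on several variables at once
X = np.column_stack([
    np.concatenate([producing, nonproducing]),        # mixed-layer clay (%)
    rng.normal([5.3] * 25 + [6.3] * 25, 0.4),         # clay-size fraction (wt.%)
    rng.normal([1.4] * 25 + [1.0] * 25, 0.2),         # Ca2+/Na+ exchange ratio
])
y = np.array([1] * 25 + [0] * 25)                     # 1 = producing zone
lda = LinearDiscriminantAnalysis().fit(X, y)
print("discriminant coefficients:", lda.coef_.round(2))
print("classification accuracy:", lda.score(X, y).round(2))
```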
The effective reproduction number $R$ was widely accepted as a key indicator during the early stages of the COVID-19 pandemic. In the UK, the $R$ value published on the UK Government Dashboard has been generated as a combined value from an ensemble of epidemiological models via a collaborative initiative between academia and government. In this paper, we outline this collaborative modelling approach and illustrate how, by using an established combination method, a combined $R$ estimate can be generated from an ensemble of epidemiological models. We analyse the $R$ values calculated for the period between April 2021 and December 2021 to show that this $R$ is robust to different model weighting methods and ensemble sizes, and that using heterogeneous data sources for validation increases its robustness and reduces the biases and limitations associated with a single source of data. We discuss how $R$ can be generated from different data sources and show that it is a good summary indicator of the current dynamics in an epidemic.
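A minimal sketch of combining per-model $R$ estimates into a single value. The published ensemble uses an established combination method; the inverse-variance weighting and the model outputs below are assumptions chosen purely for illustration:

```python
import numpy as np

# Hypothetical per-model central estimates of R and their standard errors
r_estimates = np.array([0.92, 1.05, 0.98, 1.10])
std_errors  = np.array([0.06, 0.10, 0.05, 0.12])

weights = 1.0 / std_errors**2            # inverse-variance weights
weights /= weights.sum()

r_combined = np.sum(weights * r_estimates)
se_combined = np.sqrt(1.0 / np.sum(1.0 / std_errors**2))
print(f"combined R = {r_combined:.2f} (approx. 95% CI "
      f"{r_combined - 1.96*se_combined:.2f} to {r_combined + 1.96*se_combined:.2f})")
```

Changing the weighting rule (equal weights, performance-based weights) or dropping models from the ensemble is the kind of sensitivity check the robustness analysis refers to.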
Compositional data for 464 clay minerals (2:1 type) were analyzed by statistical techniques. The objective was to understand the similarities and differences between the groups and subgroups and to evaluate statistically the classification of clay minerals in terms of chemical parameters. The statistical properties of the distributions of total layer charge (TLC), K, VIAl, VIMg, octahedral charge (OC) and tetrahedral charge (TC) were evaluated first. Critical-difference (P = 1%) comparisons of individual characteristics show that all the clay micas (illite, glauconite and celadonite) differ significantly from all the smectites (montmorillonite, beidellite, nontronite and saponite) only in their TLC and K levels; they cannot be distinguished by their VIAl, VIMg, TC or OC values, which reveal no significant differences between several of the minerals.
Linear discriminant analysis using equal priors was therefore performed to analyze the combined effect of all the chemical parameters. Using six parameters [TLC, K, VIAl, VIMg, TC and OC], eight mineral groups could be derived, corresponding to the three clay micas, the four smectites mentioned above and vermiculite. The fit between predicted and experimental values was 88.1%. Discriminant analysis using two parameters (TLC and K) resulted in classification into three broad groups corresponding to the clay micas, smectites and vermiculites (87.7% fit). Further analysis using the remaining four parameters resulted in subgroup-level classification with an 85–95% fit between predicted and experimental results. The three analyses yielded Mahalanobis D2 distances, which quantify chemical similarities and differences between the broad groups, within members of a subgroup and also between the subgroups. The classification functions derived here can be used as an aid for the classification of 2:1 minerals.
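A minimal sketch of linear discriminant analysis with equal priors and a Mahalanobis D2 distance between two group centroids, using hypothetical compositions in a six-variable space (the group means, spreads, and sample sizes below are invented, not the published data set):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n = 60
# Two hypothetical mineral groups described by six chemical parameters
illite   = rng.normal([0.85, 0.75, 1.4, 0.2, 0.55, 0.30], 0.08, size=(n, 6))
smectite = rng.normal([0.40, 0.10, 1.5, 0.3, 0.15, 0.25], 0.08, size=(n, 6))
X = np.vstack([illite, smectite])
labels = np.array(["illite"] * n + ["smectite"] * n)

# Equal priors, as in the analysis described above
lda = LinearDiscriminantAnalysis(priors=[0.5, 0.5]).fit(X, labels)
print("fit between predicted and assigned groups:", lda.score(X, labels))

# Mahalanobis D2 between group centroids, using a pooled covariance matrix
pooled_cov = np.cov(np.vstack([illite - illite.mean(0), smectite - smectite.mean(0)]).T)
diff = illite.mean(0) - smectite.mean(0)
d2 = diff @ np.linalg.solve(pooled_cov, diff)
print(f"Mahalanobis D2 (illite vs. smectite): {d2:.1f}")
```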
Chapter 5 provides a statistical analysis of the legal rulings that comprise the book’s corpus. The analysis examines possible relationships between the judges’ rulings on the legal standing of suspects’ invocations for counsel and the other variables that make up the corpus, including the type of invocation, the type of suspect (e.g., juveniles, L2 speakers), and the judges’ presidential appointments, among an array of other legal and linguistic factors. To frame the discussion and understand the seeming disconnect between suspects’ invocations for counsel and the application of the law, the chapter describes the corpus, the entry and selection of variables, and the research questions posed. The findings of the analysis provide further evidence of the effect of judicial rulings on the suspects’/defendants’ legal journey. Given the potentially significant role of suspects’ statements in securing a conviction, this chapter also discusses whether a ruling that suppresses such statements is enough to reverse a lower court’s ruling on the use of those statements and/or their content in court.
Edited by Cait Lamberton, Wharton School, University of Pennsylvania; Derek D. Rucker, Kellogg School, Northwestern University, Illinois; and Stephen A. Spiller, Anderson School, University of California, Los Angeles
A meta-analysis is a statistical analysis that combines and contrasts two or more studies of a common phenomenon. Its emphasis is on the quantification of the heterogeneity in effects across studies, the identification of moderators of this heterogeneity, and the quantification of the association between such moderators and effects. Given this, and in line with the growing appreciation in psychological research of heterogeneity not as a nuisance but as a boon for advancing theory, gauging generalizability, identifying moderators and boundary conditions, and assisting in future study planning, we make the assessment of heterogeneity the focus of this chapter. Specifically, we illustrate the assessment of heterogeneity, as well as the advantages offered by contemporary approaches to meta-analysis relative to the traditional approach, via two case studies. Following the case studies, we review several important considerations relevant to meta-analysis and then conclude with a brief summation.
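A minimal sketch of how between-study heterogeneity is commonly quantified in a random-effects meta-analysis: Cochran's Q, the DerSimonian–Laird estimate of tau², and I², computed for made-up study effect sizes and variances (this illustrates standard formulas, not the chapter's case studies):

```python
import numpy as np

effects = np.array([0.30, 0.10, 0.45, 0.25, 0.60])    # hypothetical study effect sizes
variances = np.array([0.02, 0.03, 0.015, 0.025, 0.04])

w = 1.0 / variances                                    # fixed-effect weights
fixed = np.sum(w * effects) / np.sum(w)

q = np.sum(w * (effects - fixed) ** 2)                 # Cochran's Q
df = len(effects) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)                          # DerSimonian-Laird tau^2
i2 = max(0.0, (q - df) / q) * 100                      # I^2: % of variability beyond chance

w_re = 1.0 / (variances + tau2)                        # random-effects weights
pooled = np.sum(w_re * effects) / np.sum(w_re)
print(f"Q = {q:.2f}, tau^2 = {tau2:.3f}, I^2 = {i2:.0f}%, pooled effect = {pooled:.2f}")
```

A large tau² or I² signals that the studies do not share one common effect, which is exactly the situation in which moderator analysis becomes informative.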
The introduction of the book has two purposes. First, it explains why a normative theory of ECJ procedural and organisational law is needed. It puts forward three reasons: first, procedural and organisational design involves making important choices about the role of courts in society; second, the dominant normative approach to assessing the ECJ’s work, namely the focus on its methods of interpretation, faces a number of conceptual problems; and third, ECJ judicial reform is of great practical relevance and requires normative anchoring. Second, the introduction explains the empirical strategies the book pursues to investigate the ECJ’s inner workings. In particular, it explains how requests for access to administrative documents and statistical analysis are used in the book to gain a better understanding of how the ECJ’s procedural and organisational rules are applied in practice. Finally, the introduction summarises the core of the book’s argument.
Typhoid fever is a major cause of illness and mortality in low- and middle-income settings. We investigated the association of typhoid fever and rainfall in Blantyre, Malawi, where multi-drug-resistant typhoid has been transmitting since 2011. Peak rainfall preceded the peak in typhoid fever by approximately 15 weeks [95% confidence interval (CI) 13.3, 17.7], indicating no direct biological link. A quasi-Poisson generalised linear modelling framework was used to explore the relationship between rainfall and typhoid incidence at biologically plausible lags of 1–4 weeks. We found a protective effect of rainfall anomalies on typhoid fever, at a two-week lag (P = 0.006), where a 10 mm lower-than-expected rainfall anomaly was associated with up to a 16% reduction in cases (95% CI 7.6, 26.5). Extreme flooding events may cleanse the environment of S. Typhi, while unusually low rainfall may reduce exposure from sewage overflow. These results add to evidence that rainfall anomalies may play a role in the transmission of enteric pathogens, and can help direct future water and sanitation intervention strategies for the control of typhoid fever.
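A minimal sketch of the kind of quasi-Poisson regression of weekly case counts on a lagged rainfall anomaly (here a two-week lag), using simulated data rather than the Blantyre surveillance series; the effect size and series length are assumptions for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_weeks = 200
rain_anomaly = rng.normal(0, 10, n_weeks)              # mm departure from seasonal expectation
lagged = pd.Series(rain_anomaly).shift(2)              # two-week lag
log_mu = np.log(20) + 0.015 * np.nan_to_num(lagged)    # hypothetical association
cases = rng.poisson(np.exp(log_mu))

data = pd.DataFrame({"cases": cases, "rain_lag2": lagged}).dropna()
X = sm.add_constant(data[["rain_lag2"]])

# Quasi-Poisson: Poisson GLM with dispersion estimated from the Pearson chi-square
fit = sm.GLM(data["cases"], X, family=sm.families.Poisson()).fit(scale="X2")
print(fit.summary())
```

Repeating the fit at lags of one to four weeks and comparing the coefficients is the simplest way to reproduce the lag-screening step described above.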
The measurement and communication of the effect size of an independent variable on a dependent variable is critical to effective statistical analysis in the Social Sciences. We develop ideas about how to extend traditional methods of evaluating relationships in multivariate models to explain and illustrate the statistical power of a focal independent variable. Even with a growing acceptance of the need to report effect sizes, scholars in the management community have few well-established protocols or guidelines for reporting effect sizes. In this editorial essay, we: (1) review the necessity of reporting effect sizes; (2) discuss commonly used measures of effect size and accepted cut-offs for large, medium, and small effect sizes; (3) recommend standards for reporting effect sizes via verbal descriptions and graphical presentations; and (4) present best practice examples of reporting and discussing effect size. In summary, we provide guidance for authors on how to report and interpret effect sizes, advocating for rigor and completeness in statistical analysis.
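A minimal sketch of computing and verbally reporting one common standardized effect size, Cohen's d, against the conventional (and much-debated) small/medium/large benchmarks of 0.2/0.5/0.8; the data are made up:

```python
import numpy as np

rng = np.random.default_rng(4)
treatment = rng.normal(5.5, 1.0, 80)
control = rng.normal(5.0, 1.0, 80)

# Cohen's d with a pooled standard deviation
n1, n2 = len(treatment), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) + (n2 - 1) * control.var(ddof=1))
                    / (n1 + n2 - 2))
d = (treatment.mean() - control.mean()) / pooled_sd

label = "small" if abs(d) < 0.5 else "medium" if abs(d) < 0.8 else "large"
print(f"Cohen's d = {d:.2f} ({label} by the conventional cut-offs)")
```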
This topic examines how demand relationships can be estimated from empirical data. The whole process of performing an empirical study is explained, starting from model specification, through the collection of data, to statistical analysis and interpretation of results. The focus is on statistical analysis and the application of regression analysis using OLS. Different mathematical forms of the regression model are explained, along with the relevant transformations and interpretations. The concepts of goodness of fit and the coefficient of determination are explained, along with their application in selecting the best model. The advantages of using multiple regression are discussed, as are its implementation and interpretation. Analysis of variance (ANOVA) is explained, along with how it relates to goodness of fit. The implications of empirical studies are also discussed, and the light they shed on economic theory. More advanced aspects, related to inferential statistics and hypothesis testing, are covered in an appendix, along with the assumptions involved in the classical linear regression model (CLRM) and the consequences of violating these assumptions.
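A minimal sketch of the estimation step on simulated data: a demand relationship fitted by OLS in log-log form (so the slopes are elasticities), with the coefficient of determination and the regression F statistic read from the fitted model. The elasticities and data-generating assumptions below are illustrative only:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 100
price = rng.uniform(1, 10, n)
income = rng.uniform(20, 60, n)
# Hypothetical demand with price elasticity -1.2 and income elasticity 0.8
quantity = np.exp(3.0 - 1.2 * np.log(price) + 0.8 * np.log(income) + rng.normal(0, 0.1, n))

data = pd.DataFrame({"quantity": quantity, "price": price, "income": income})
fit = smf.ols("np.log(quantity) ~ np.log(price) + np.log(income)", data=data).fit()
print(fit.params)                                       # estimated elasticities
print("R-squared:", round(fit.rsquared, 3))             # coefficient of determination
print("Regression F statistic:", round(fit.fvalue, 1))  # ANOVA for the regression
```

Comparing this log-log fit with a linear specification on the same data, via R-squared and residual diagnostics, mirrors the model-selection discussion above.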
This chapter presents basic information for a wide readership on how accents differ and how those differences are analyzed, then lays out the sample of performances to be studied, the phonemes and word classes to be analyzed, and the methods of phonetic, quantitative, and statistical analysis to be followed.
The SARS-CoV-2 virus has caused the largest pandemic of the 21st century, with hundreds of millions of cases and tens of millions of fatalities. Scientists around the world are racing to develop vaccines and new pharmaceuticals to overcome the pandemic and offer effective treatments for COVID-19 disease. Consequently, there is an essential need to better understand how the pathogenesis of SARS-CoV-2 is affected by viral mutations and to determine the conserved segments in the viral genome that can serve as stable targets for novel therapeutics. Here, we introduce a text-mining method to estimate the mutability of genomic segments directly from a reference (ancestral) whole-genome sequence. The method relies on calculating the importance of genomic segments based on their spatial distribution and frequency over the whole genome. To validate our approach, we perform a large-scale analysis of the viral mutations in nearly 80,000 publicly available SARS-CoV-2 predecessor whole-genome sequences and show that these results are highly correlated with the segments predicted by the statistical method used for keyword detection. Importantly, these correlations are found to hold at the codon and gene levels, as well as for gene coding regions. Using the text-mining method, we further identify codon sequences that are potential candidates for siRNA-based antiviral drugs. Significantly, one of the candidates identified in this work corresponds to the first seven codons of an epitope of the spike glycoprotein, which is the only SARS-CoV-2 immunogenic peptide without a match to a human protein.
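A minimal, illustrative sketch of scoring genome segments by their frequency and spatial distribution along a sequence, in the spirit of keyword-detection statistics. This is not the authors' exact formula: the dispersion score (standard deviation of inter-occurrence gaps over their mean) and the toy sequence are assumptions chosen purely to show the shape of such a computation:

```python
import numpy as np

def segment_scores(genome: str, k: int = 3):
    """Score each k-mer (e.g., codon-length segment) by its frequency and by the
    dispersion of the gaps between its successive occurrences along the sequence."""
    positions = {}
    for i in range(len(genome) - k + 1):
        positions.setdefault(genome[i:i + k], []).append(i)

    scores = {}
    for kmer, pos in positions.items():
        if len(pos) < 3:
            continue                       # too rare to assess spatial distribution
        gaps = np.diff(pos)
        # Clustered segments (uneven gaps) score higher than evenly spread ones
        scores[kmer] = len(pos) * (gaps.std() / gaps.mean())
    return scores

# Toy sequence standing in for a reference genome
toy_genome = "ATGGCGTACATGATGCCGTACTACATGGCGATGTACCCGATG" * 20
top = sorted(segment_scores(toy_genome).items(), key=lambda kv: kv[1], reverse=True)[:5]
print(top)
```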
As mechanical simulations play an increasingly important role in engineering projects, an appropriate integration of simulations into design-oriented product development processes is essential for efficient collaboration. To identify and overcome barriers between design and simulation departments, the BRIDGES approach was developed for barrier reduction in design engineering and simulation. This paper presents the industrial evaluation of the approach using a multi-method study consisting of an online survey and focus group workshops. The experts' assessments were statistically analysed to build a connection matrix of barriers and recommendations. In total, 59 participants from multiple industries with practical experience in the field contributed to the online survey, while 24 experts were recruited for the focus group workshops. As a result of the workshops, both the data-based and the workshop-based parts of the BRIDGES approach were assessed as practically applicable and beneficial for raising the efficiency of collaboration. This provides an empirically grounded connection between barriers and suitable recommendations, allowing companies to identify and overcome collaboration barriers between design and simulation.