This chapter describes the journey that the scientific assessment travels from author nomination through to the drafting and reviewing of the emerging report, and explores the social scientific order that structures and imprints on the IPCC’s writing of climate change through the process. It is in the initial stages of the scientific assessment, in the government nomination and author selection processes, that asymmetries in global knowledge of climate change and their effects become apparent. While developed countries have institutionalised processes for identifying and nominating experts, the majority of developing countries do not submit any author nominations. Once compiled, it is scientific conventions and measures of authority that are used to select and appoint the expertise necessary to fulfil the government approved outline of the report. However, when these activities are situated in broader patterns and practices of knowledge production, it becomes apparent that these reproduce the structures and exclusions of the existing global knowledge economy. These asymmetries are also apparent in the order of relations in the author teams and the submission of government review comments, which reduces the space for more diverse understandings and knowledges of climate change that are relevant to and reflect the interests and needs of all IPCC member governments. The IPCC has attempted to address these asymmetries through selection criteria and other mechanisms to shape the social order of authorship, which to date have proven more successful in broadening gender representation than ensuring the full participation of developing country authors in the assessment.
At the core of epidemiology is the use of quantitative methods to study health, and how it may be improved, in populations. It is important to note that epidemiology concerns the study not only of diseases but of all health-related events. Rational health-promoting public policies require a sound understanding of causation. The epidemiological analysis of a disease or activity from a population perspective is vital in order to be able to organize and monitor effective preventive, curative and rehabilitative services. All health professionals and health-service managers need an awareness of the principles of epidemiology. They need to go beyond questions relating to individual patients and confront more fundamental questions such as ‘Why did this person get this disease at this time?’, ‘Is the occurrence of the disease increasing and, if so, why?’ and ‘What are the causes or risk factors for this disease?’
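As a minimal sketch of the kind of quantitative measure meant here, the following Python snippet computes an incidence rate and a risk ratio from invented cohort figures (all numbers are illustrative, not taken from the text, and person-time is simplified by ignoring early exits):

```python
# Minimal sketch: two basic epidemiological measures, computed from
# hypothetical cohort data (all figures are invented for illustration).

def incidence_rate(new_cases: int, person_years: float) -> float:
    """New cases per unit of person-time at risk."""
    return new_cases / person_years

def risk_ratio(cases_exposed: int, n_exposed: int,
               cases_unexposed: int, n_unexposed: int) -> float:
    """Risk in the exposed group divided by risk in the unexposed group."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Hypothetical example: 30 cases among 2,000 exposed people and 10 cases
# among 2,000 unexposed people, each followed for 5 years.
print(incidence_rate(40, 4000 * 5))    # cases per person-year in the whole cohort
print(risk_ratio(30, 2000, 10, 2000))  # a risk ratio of 3.0 suggests an association
```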
To effectively target public health interventions for greatest impact, it is essential that public health practitioners have a clear understanding of the populations they work with. As described in Chapter 1, understanding these populations, their health status and health needs draws on skills from several disciplines, including demography, epidemiology and statistics. Brought together, these skills allow the practitioner to understand the characteristics of the population of interest, the key health issues that it faces and the broader factors that have a particular influence on the health of that population. These broader factors are generally referred to as the wider determinants of health or the social determinants of health. Information is also vital in allowing practitioners to assess the impact of public health interventions.
Easton support iterations of Prikry-type forcing notions are considered here. New ways of constructing normal ultrafilters in extensions are presented. It turns out that, in contrast with other supports, seemingly unrelated measures or extenders can be involved here.
The identification of fraudulent and questionable research conduct is not something new. However, in the last 12 years the aim has been to identify specific problems and concrete solutions applicable to each area of knowledge. For example, previous work has focused on questionable and responsible research conduct associated with clinical assessment and measurement practices in psychology and related sciences, or applicable to specific areas of study, such as suicidology. One area that merits further examination of questionable and responsible research behaviors is psychometrics. Focusing on psychometric research is important and necessary, as without adequate evidence of construct validity the overall validity of the research is at least debatable. Our interest here lies in (a) identifying questionable research conduct specifically linked to psychometric studies; and (b) promoting greater awareness and widespread application of responsible research conduct in psychometrics research. We believe that the identification and recognition of these behaviors is important and will help us to improve our daily work as psychometricians.
This chapter explains why it is critical to measure health outcomes. It includes a review of the current measurement landscape in health care in the context of the Donabedian framework for assessing health care quality. It also reorients the reader to a focus on measuring outcomes and outlines why measuring outcomes can be challenging but must be done. The chapter also provides the reader with prompts for self-reflection on their outcome measurement aspirations and describes who the intended audience is for the guide.
We first present, with proofs, the basic and fundamental concepts and theorems of abstract and geometric measure theory. These include, in particular, the three classical covering theorems: the 4r, Besicovitch, and Vitali-type theorems. We also include a short section on probability theory, covering conditional expectations and martingale theorems. We devote quite a significant amount of space to Hausdorff and packing measures. In particular, we formulate and prove the Frostman Converse Lemmas, which form an indispensable tool for proving that a Hausdorff or packing measure is finite, positive, or infinite. Some of these are frequently called, particularly in the fractal geometry literature, the mass redistribution principle, although these lemmas involve no redistribution of mass. We then deal with the Hausdorff, packing, and box-counting dimensions of sets and measures, and provide tools to calculate and estimate them.
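For orientation, the standard textbook form of this kind of converse result, usually called the mass distribution principle, can be stated as follows (this is the generic statement, not a quotation from the chapter):

```latex
% Mass distribution principle (standard textbook form): a measure with
% controlled local growth on a set forces its Hausdorff measure to be positive.
If $\mu$ is a Borel measure with $\mu(A) > 0$ and there are constants
$C > 0$, $s \ge 0$, $r_0 > 0$ such that
\[
  \mu\bigl(B(x,r)\bigr) \le C\, r^{s}
  \qquad \text{for all } x \in A,\ 0 < r \le r_0,
\]
then $\mathcal{H}^{s}(A) \ge \mu(A)/C > 0$, and in particular
$\dim_{\mathrm H} A \ge s$.
```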
Chapter 1 sets out the conceptual framework through which the book examines research evaluation and names the key players and processes involved. It begins by outlining The Evaluation Game’s key contention that research evaluation is a manifestation of a broader technology which the book refers to as “evaluative power.” Next, it describes how the evaluative power comes to be legitimized and how it introduces one of its main technologies: research evaluation systems. The chapter then defines games as top-down social practices and, on the basis of this conceptual framework, presents the evaluation game as a reaction to or resistance against the evaluative power. Overall, the chapter shows how the evaluation both of institutions and of the knowledge produced by researchers working in them has, unavoidably, become an integral element of the research process itself.
This chapter focuses on the complexities of systems and selves as they pertain to critical consciousness scholarship. By systems, we mean the multiple and intersecting systems of oppression that historically and currently operate in the United States and globally. By selves, we mean individuals’ understandings of themselves and their social locations. Our chapter is organized in three sections: (1) the complexity of systems, (2) the complexity of selves, and (3) the complexity of relationships between systems and selves. Within each section, we highlight several points about complexity, share some observations about how that complexity has been addressed within quantitative critical consciousness scholarship, and make practical recommendations for quantitative research specifically (and researchers across fields and methodological orientations more generally). We suggest that attending to these complexities will enhance scholarship and contribute to the knowledge base about how to promote socially just actions in different groups of young people and in different contexts.
Individual differences in decision making are a topic of longstanding interest, but often yield inconsistent and contradictory results. After providing an overview of individual difference measures that have commonly been used in judgment and decision-making (JDM) research, we suggest that our understanding of individual difference effects in JDM may be improved by amending our approach to studying them. We propose four recommendations for improving the pursuit of individual differences in JDM research: a more systematic approach; more theory-driven selection of measures; a reduced emphasis on main effects in favor of interactions between individual differences and decision features, situational factors, and other individual differences; and more extensive communication of results (whether significant or null, published or unpublished). As a first step, we offer our database—the Decision Making Individual Differences Inventory (DMIDI; http://www.sjdm.org/dmidi), a free, public resource that categorizes and describes the most common individual difference measures used in JDM research.
In response to the public's concerns about animal welfare in swine husbandry, the pig (Sus scrofa domesticus) sector introduced improved measures that focused on single rather than multiple dimensions of animal welfare concern, without accounting for their impact on public attitudes. These measures failed to improve attitudes to pig husbandry. The present study uses a more comprehensive approach by evaluating animal welfare measures in terms of their effect on animal welfare, farm income and public attitudes. Four measures were defined for each of the following societal aspects of sow husbandry: piglet mortality, tail biting and the indoor housing of gestating sows. A simulation model was developed to estimate the effects of the measures, and Data Envelopment Analysis was used to compare the measures in terms of their effects on animal welfare, farm income and public attitudes. Only the piglet mortality measures were found to have a positive effect on farm income, but they showed a relatively low effect on animal welfare and public attitudes. The most efficient measure was that which included straw provision, daylight and increased group sizes for gestating sows. The level of improvement a measure achieved in animal welfare did not necessarily equate to the same level of improvement in public attitudes or decrease in farm income.
Rectifiable sets, measures, currents and varifolds are foundational concepts in geometric measure theory. The last four decades have seen the emergence of a wealth of connections between rectifiability and other areas of analysis and geometry, including deep links with the calculus of variations and complex and harmonic analysis. This short book provides an easily digestible overview of this wide and active field, including discussions of historical background, the basic theory in Euclidean and non-Euclidean settings, and the appearance of rectifiability in analysis and geometry. The author avoids complicated technical arguments and long proofs, instead giving the reader a flavour of each of the topics in turn while providing full references to the wider literature in an extensive bibliography. It is a perfect introduction to the area for researchers and graduate students, who will find much inspiration for their own research inside. This title is also available as open access on Cambridge Core.
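For readers unfamiliar with the central notion, the standard definition (recalled here for orientation, not quoted from the book) is:

```latex
% Standard definition: a countably n-rectifiable set is, up to an
% H^n-null set, a countable union of Lipschitz images of R^n.
A set $E \subseteq \mathbb{R}^{d}$ is \emph{countably $n$-rectifiable} if
\[
  E \subseteq E_0 \cup \bigcup_{i=1}^{\infty} f_i(\mathbb{R}^{n}),
\]
where $\mathcal{H}^{n}(E_0) = 0$ and each
$f_i \colon \mathbb{R}^{n} \to \mathbb{R}^{d}$ is Lipschitz.
```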
Chapter 1 explains the function of Article 1 as the threshold provision defining the applicability of the Agreement on Safeguards. It explains what safeguard measures are for purposes of WTO law.
This chapter explains how the collected data can be processed with the aim of extracting meaningful measures, and how statistical analysis can be used to support significant conclusions. It first introduces common quantitative measures for transparency research, including measures for tracking, privacy, fairness, and similarity. To compute most of these measures, the data need to be preprocessed to extract the response variables of interest from the raw collected data, for example using simple transformations or heuristics, machine learning classifiers or natural language processing, or static and dynamic analysis methods for mobile apps. Finally, the chapter explains statistical methods that make it possible to draw meaningful and statistically significant conclusions about the behavior of the response variables in the experiment.
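As a minimal sketch of such a pipeline (not the chapter's own code; the variable names and data below are invented for illustration), the following Python snippet computes a simple similarity measure and applies a non-parametric test to a hypothetical tracking-related response variable:

```python
# Illustrative sketch: extract a response variable, compute a similarity
# measure, and test two experimental groups for a statistically significant
# difference. All data below are invented.
from scipy.stats import mannwhitneyu

def jaccard_similarity(a: set, b: set) -> float:
    """Jaccard index: intersection over union, a common similarity measure."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Hypothetical response variable: number of third-party trackers per site,
# measured under two treatments (e.g. with and without a consent banner).
trackers_control   = [12, 7, 15, 9, 11, 14, 8, 10]
trackers_treatment = [5, 3, 6, 4, 7, 2, 5, 6]

# Non-parametric test, since tracker counts are rarely normally distributed.
stat, p_value = mannwhitneyu(trackers_control, trackers_treatment,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")

# Similarity between the sets of tracker domains seen on two sites (hypothetical).
print(jaccard_similarity({"ads.example", "cdn.example"},
                         {"ads.example", "stats.example"}))
```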
Often it is more instructive to know 'what can go wrong' and to understand 'why a result fails' than to plod through yet another piece of theory. In this text, the authors gather more than 300 counterexamples - some of them both surprising and amusing - showing the limitations, hidden traps and pitfalls of measure and integration. Many examples are put into context, explaining relevant parts of the theory, and pointing out further reading. The text starts with a self-contained, non-technical overview on the fundamentals of measure and integration. A companion to the successful undergraduate textbook Measures, Integrals and Martingales, it is accessible to advanced undergraduate students, requiring only modest prerequisites. More specialized concepts are summarized at the beginning of each chapter, allowing for self-study as well as supplementary reading for any course covering measures and integrals. For researchers, it provides ample examples and warnings as to the limitations of general measure theory. This book forms a sister volume to René Schilling's other book Measures, Integrals and Martingales (www.cambridge.org/9781316620243).
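A classic example of the kind of pitfall such a collection treats (stated here for illustration, not quoted from the book) is the Dirichlet function:

```latex
% The indicator of the rationals on [0,1] is Lebesgue integrable (integral 0,
% since Q is a null set) but not Riemann integrable, because every upper sum
% equals 1 while every lower sum equals 0.
\[
  f(x) = \mathbf{1}_{\mathbb{Q}}(x), \quad x \in [0,1]:
  \qquad \int_{[0,1]} f \, d\lambda = 0,
  \qquad \underline{\int_0^1} f\,dx = 0 \;\ne\; 1 = \overline{\int_0^1} f\,dx .
\]
```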
In this paper, we present a theoretical foundation for a representation of a data set as a measure in a very large hierarchically parametrized family of positive measures, whose parameters can be computed explicitly (rather than estimated by optimization), and illustrate its applicability to a wide range of data types. The preprocessing step then consists of representing data sets as simple measures. The theoretical foundation consists of a dyadic product formula representation lemma, and a visualization theorem. We also define an additive multiscale noise model that can be used to sample from dyadic measures and a more general multiplicative multiscale noise model that can be used to perturb continuous functions, Borel measures, and dyadic measures. The first two results are based on theorems in [15, 3, 1]. The representation uses the very simple concept of a dyadic tree and hence is widely applicable, easily understood, and easily computed. Since the data sample is represented as a measure, subsequent analysis can exploit statistical and measure theoretic concepts and theories. Because the representation uses the very simple concept of a dyadic tree defined on the universe of a data set, and the parameters are simply and explicitly computable and easily interpretable and visualizable, we hope that this approach will be broadly useful to mathematicians, statisticians, and computer scientists who are intrigued by or involved in data science, including its mathematical foundations.
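The paper's construction is considerably richer than this (a dyadic product formula representation, visualization theorem, and multiscale noise models), but the elementary idea of attaching explicitly computed, normalized weights to the nodes of a dyadic tree can be sketched as follows; the function name, depth parameter, and sample data are my own, for illustration only:

```python
# Illustrative sketch only: represent a 1-D sample in [0, 1) as a normalized
# measure on a dyadic tree, with one explicitly computed weight per dyadic cell.
# This is not the paper's construction; it only shows the dyadic-tree idea.
from collections import defaultdict

def dyadic_measure(points, depth=3):
    """Return {(level, index): mass}, where the cell is [index/2^level, (index+1)/2^level)."""
    n = len(points)
    tree = defaultdict(float)
    for level in range(depth + 1):
        scale = 2 ** level
        for x in points:
            tree[(level, int(x * scale))] += 1.0 / n  # empirical mass of the cell
    return dict(tree)

# Hypothetical sample; the weights at each level sum to 1, as a measure should.
sample = [0.12, 0.17, 0.43, 0.56, 0.58, 0.91]
mu = dyadic_measure(sample, depth=2)
for (level, index), mass in sorted(mu.items()):
    print(f"level {level}, cell {index}: mass {mass:.3f}")
```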
Different and evolving conceptualisations of perfectionism have led to the development of numerous perfectionism measures in an attempt to capture the true representations of the construct. It is, therefore, important to ensure that these instruments are valid and reliable. The present systematic review examined the literature for the psychometric properties of the most commonly used general multidimensional trait perfectionism self-report measures. Relevant studies were identified by a systematic electronic search of academic databases. A total of 349 studies were identified, with 38 of these meeting inclusion criteria. The psychometric properties presented in each of these studies were subjected to assessment using a standardised protocol. All studies were evaluated by two reviewers independently. Results indicated that while none of the included measures demonstrated adequacy across all of the nine psychometric properties assessed, most were found to possess adequate internal consistency and construct validity. The absence of evidence to support adequate measurement properties over a number of domains for the measures included in this review may be attributed to the criteria of adequacy used, with some appearing overly strict and less relevant to perfectionism measures. Clinical and research relevance of the present findings and directions for future research are discussed.
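Since internal consistency is one of the psychometric properties assessed, a minimal sketch of its most common index, coefficient (Cronbach's) alpha, may help; the item scores below are invented and are not drawn from the reviewed studies:

```python
# Minimal sketch: coefficient (Cronbach's) alpha, a standard index of internal
# consistency. Item scores below are invented for illustration.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Five respondents answering four perfectionism items on a 1-5 scale (hypothetical).
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # values >= .70 are conventionally deemed adequate
```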
Deep-water technology and commercial interests have put the protection of underwater cultural heritage under considerable pressure in recent decades. Yet the 2001 UNESCO Convention has the potential to fend off the threat—if fully implemented. This article sets out the legislative duties States Parties have under one of the Convention's core provisions: Article 16. It requires States Parties to take a triad of legislative measures: they must enact prohibitions, impose criminal sanctions and establish corresponding jurisdiction over their nationals and vessels. In addition, the comprehensive protection of underwater cultural heritage also necessitates measures covering acts of corporate treasure hunters, even though this is not required by the Convention itself.