States are measured and ranked on an ever-expanding array of country performance indicators (CPIs). Such indicators are seductive because they provide actionable, accessible, and ostensibly objective information on complex phenomena to time-pressed officials and enable citizens to hold governments to account. At the same time, a sizeable body of research has explored how CPIs entail ‘black boxing’ and the depoliticisation of political phenomena. This article advances our understanding of the consequences of governance by indicators by examining how CPIs generate specific forms of politicisation that can undermine a given CPI’s authority over time. We contend that CPIs rely upon two different claims to authority that operate in tension with one another: i) the claim to provide expert, objective knowledge and ii) the claim to render the world more transparent and to secure democratic accountability. Analysing CPIs in the fields of education, economic governance, and health and development, we theorise and empirically document how this tension leads to three distinct forms of politicisation: scrutiny from experts that politicises the value judgements embodied in a CPI; competition, whereby rival CPIs contest the objectivity of leading CPIs’ knowledge; and corruption, where gaming of a CPI challenges its claim to secure transparent access to social reality. While the analysis identifies multiple paths to the politicisation and undermining of specific CPIs’ authority, the article elaborates why these processes tend to leave intact, and even reproduce, the legitimacy of CPIs as a governance technology.
Over the past thirty years, the anti-corruption agenda has been integrated into dominant discourses of development, good governance, and democracy, reshaping political practices and knowledge production. This has involved redefining concepts, operationalizing measures, and legitimizing policies. While academics have renewed their focus on corruption, emphasizing global convergences and institutional designs, limited attention has been paid to how anti-corruption expertise is constituted and mobilized. Gaps remain in understanding the approaches shaping anti-corruption knowledge and how inequalities in knowledge production influence public policy. Recognizing the embeddedness of this expertise requires examining its historical roots, key actors, methods, and channels of mobilization. This chapter uncovers the historical origins of anti-corruption conceptions, identifies experts by their epistemological and methodological approaches, and interprets their positions. The study identifies three dominant poles of power: American academics, quantitative economists, and media-exposed practitioners. These poles reflect disparities in professional stability, autonomy, and proximity to international financial institutions. Using a historical, reflexive, and relational perspective, the investigation maps the social forces and structures shaping the field, offering insights into the production and mobilization of anti-corruption knowledge.
Unlike previous approaches to sustainable investing, which focused primarily on excluding companies from problematic sectors such as tobacco, the aim of environmental, social, and governance (ESG) integration is to incorporate the assessment of ESG characteristics within mainstream investment analysis. This aim has given rise to claims that ESG integration is not about value judgments but focuses only on neutral risk–return calculations. Against such framing, this chapter argues that various ethical concerns inevitably arise when considering the quantification process underlying the generation of data used in ESG integration approaches. Drawing on the literature related to quantification and commensuration, the chapter identifies four areas in which ethical concerns can arise: (1) the strong focus on financial materiality; (2) the aggregation of disparate and often incommensurable ESG data; (3) ESG measurement problems; and (4) the treatment of ESG data as a private good. The chapter shows how quantification processes in these four areas give cause for ethical concerns related to which aspects of sustainability are rendered visible or invisible; how power relations between different field actors are structured by quantification; and which organizations have access to the opportunities that prevailing processes of quantification afford.
This leading textbook introduces students and practitioners to the identification and analysis of animal remains at archaeological sites. The authors use global examples from the Pleistocene to the present to explain how zooarchaeology allows us to gain insights into relationships among people and their natural and social environments, especially site-formation processes, economic strategies, domestication, and paleoenvironments. This new edition reflects the significant technological developments in zooarchaeology that have occurred in the past two decades, notably ancient DNA, proteomics, and isotope geochemistry. Substantially revised to reflect these trends, the volume also highlights novel applications, current issues in the field, the growth of international zooarchaeology, and the increased role of interdisciplinary collaborations. In view of the growing importance of legacy collections, voucher specimens, and access to research materials, it also includes a substantially revised chapter that addresses management of zooarchaeological collections and curation of data.
The analysis of similarities does not involve meaningless description but rather the more systematic use of Most Different Systems Designs, temporal variation, and large-scale comparative designs. In parallel, the quantification of comparative politics should progress further. Congruence analysis between data and competing theories can be a (or the) solution when N=1, but it cannot be applied in settings with N>1 without risks of selection bias and limitations in handling multiple causation.
The quantifiers most and more than half pose a challenge to formal semantic analysis. On the one hand, their meanings seem essentially the same, prompting accounts that treat them as logically equivalent. On the other hand, their behavior diverges in a number of interesting ways. This article draws attention to some previously unnoticed contrasts between the two and develops a novel semantic analysis of them, based on principles of measurement theory. Most and more than half have logical forms that are superficially equivalent (per Hackl 2009), but that place different requirements on the structure of the underlying measurement scale: more than half requires a ratio scale, while most can be interpreted relative to an ordinal scale or one with a semiordered structure. The latter scale type is motivated by findings from psychophysics and by psychological models of humans’ approximate numerical abilities. A corpus analysis is presented that confirms the predictions of the present account.
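One way to make the scale-theoretic contrast concrete is the following schematic rendering (a sketch only; the article's actual logical forms, following Hackl 2009, are more articulated):

```latex
% 'more than half' compares a subset against half of the total:
% halving presupposes a ratio scale for the measure \mu.
\text{more than half}(A)(B) \iff \mu(A \cap B) > \tfrac{1}{2}\,\mu(A)

% 'most' compares two disjoint subsets directly:
% this is well-defined already on an ordinal (or semiordered) scale.
\text{most}(A)(B) \iff \mu(A \cap B) > \mu(A \setminus B)
```

When \mu is additive, \mu(A) = \mu(A \cap B) + \mu(A \setminus B), so the two conditions coincide; this captures the superficial equivalence, which breaks down once the scale supports only ordering.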
We argue that DISTRIBUTIONAL MODIFICATION is one strategy that language affords for composing propositions about the quantity of entities that participate in a given situation. Distributional modifiers apply to kind descriptions, contributing the entailment that the kind is instantiated by a SET of tokens with a particular distribution. As a case study, we analyze frequency adjectives (FAs, e.g. occasional). We show that previous work, including our own, has suffered from focusing on the paraphrases of FAs rather than on their morphosyntax. We argue for two subclasses of FAs: those that are intersective modifiers sortally restricted to events, and those that are not. The study reinforces two novel theoretical claims in Gehrke & McNally 2011: sometimes kinds are realized by sets of tokens, rather than individual tokens; and some clauses constitute descriptions of EVENT KINDS, rather than EVENT TOKENS.
Typologically, the world's languages vary in how they express universal quantification and negative quantification. In patterns of concord, a single distributive or negative meaning is expressed redundantly on multiple morphological items. Sign languages, too, show semantic variation, but, surprisingly, this variation populates a specific corner of the full typological landscape. When we focus on manual signs, sign languages systematically have distributive concord but tend not to have negative concord in its canonical form. Here, I explain these typological facts as the reflection of an abstract, iconic bias. Recent work on distributive concord and negative concord has proposed that the phenomena can be explained in relation to the discourse referents they make available. The use of space in sign language also invites iconic inferences about the referents introduced in discourse. I show that these iconic inferences coincide with the meaning of distributive concord but contradict the meaning of negative concord. The sign language typology is thus explained based on what is easy and hard to represent in space.
Two common myths shape thinking about shipping and oceans. First, ships transport nearly everything we consume. Second, we live on planet ocean, not planet earth. Although each claim is, in one sense, correct, each is also misleading. While ships transport 80–90% of international trade (by weight), they carry only 10.8% of the economy's material footprint. And while the ocean covers 71% of the planet's surface, it makes up only 0.12% of its volume. This article queries these widely accepted numbers, not to ‘correct’ them but to highlight the need to question the common myths that all too often guide environmental intervention.
Technical summary
Ships transport 90% of everything. The planet is 71% ocean. Environmentalists reference these statistics when they advocate ‘buying local’ to reduce shipping's environmental footprint. The shipping industry references them to argue that the industry is ‘too big to fail’ and therefore should not be overly burdened by environmental regulations; furthermore, shipping's emissions are said to be ‘too small to matter,’ considering the role the industry plays in enabling globalised consumer capitalism. Yet, this article shows that ships transport only about 10.8% of everything (by material footprint) and the planet is only 0.12% ocean (by volume). This suggests that we should employ the 90% and 71% figures with caution. Evidence demonstrates that environmental policy derived from crude quantification of an industry's significance can have unintended, and at times unwanted, consequences for the world's economy and, crucially, the planet's environment. Although we do not question the global significance of either the ocean or maritime transport, we argue that for appeals to size and scale to be useful in generating ocean consciousness and guiding policy interventions they need to be questioned every time they are invoked.
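The 0.12% volume figure can be verified with a back-of-envelope calculation using standard reference values (a mean Earth radius of about 6371 km and a global ocean volume of about 1.34 × 10⁹ km³; these inputs are not from the article itself):

```latex
V_{\text{Earth}} = \tfrac{4}{3}\pi R^{3}
  \approx \tfrac{4}{3}\pi (6371\ \text{km})^{3}
  \approx 1.08 \times 10^{12}\ \text{km}^{3}
\qquad
\frac{V_{\text{ocean}}}{V_{\text{Earth}}}
  \approx \frac{1.34 \times 10^{9}}{1.08 \times 10^{12}}
  \approx 0.12\%
```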
Social media summary
Ships transport 80–90% of international trade, but only 11% of the economy's material footprint. This wide gap urges us to rethink common myths about the economy and the environment.
Meinongianism (named after Alexius Meinong) is, roughly, the view that there are not only existent but also nonexistent objects. In this book, Meinong's so-called object theory as well as “neo-Meinongian” reconstructions are presented and discussed, especially with respect to logical issues, both from a historical and a systematic perspective. Among others, the following topics are addressed: basic principles and motivations for Meinongianism; the distinction between “there is” (“∃”) and “exists” (“E!”); interpretations and kinds of quantification; Meinongianism, the principle of excluded middle and the principle of non-contradiction; the nuclear-extranuclear distinction and modes of predication; varieties of neo-Meinongianism and Meinongian logics.
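A stock illustration of how the ∃/E! distinction works (a textbook-style example, not a formula quoted from this book):

```latex
% 'Some things do not exist': the particular quantifier read as an
% ontologically neutral 'there is', combined with an existence predicate E!
\exists x\, \neg E!x

% e.g. the golden mountain g is golden and a mountain, yet does not exist:
G(g) \wedge M(g) \wedge \neg E!(g)
```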
This chapter begins to explore the impact of slave majorities and of limited white migration and settlement in the tropics. It starts with Barbados in the middle of the seventeenth century, showing that the island had held a substantial white majority population and that it was the most densely settled place in England’s overseas empire before a mix of disease and emigration, combined with dwindling immigration, led to a sharp decline in the white population. The chapter details the increasing black-to-white ratios at tropical sites across the colonies after the dispersal of white settlers from Barbados. The English tried to mitigate their fears of these emerging racial imbalances by turning to new modes of political arithmetic to socially engineer populations and recruit more European migrants. English colonial architects started to calculate exactly how many white settlers would be necessary to ensure the survival of the English in the tropics and to counter the new crisis in political economy. These constructed metrics helped to entrench ideas about racial distinctions.
A method is developed to investigate the additive structure of data that (a) may be measured at the nominal, ordinal or cardinal levels, (b) may be obtained from either a discrete or continuous source, (c) may have known degrees of imprecision, or (d) may be obtained in unbalanced designs. The method also permits experimental variables to be measured at the ordinal level. It is shown that the method is convergent, and includes several previously proposed methods as special cases. Both Monte Carlo and empirical evaluations indicate that the method is robust.
A new procedure is discussed which fits either the weighted or simple Euclidean model to data that may (a) be defined at either the nominal, ordinal, interval or ratio levels of measurement; (b) have missing observations; (c) be symmetric or asymmetric; (d) be conditional or unconditional; (e) be replicated or unreplicated; and (f) be continuous or discrete. Various special cases of the procedure include the most commonly used individual differences multidimensional scaling models, the familiar nonmetric multidimensional scaling model, and several other previously undiscussed variants.
The procedure optimizes the fit of the model directly to the data (not to scalar products determined from the data) by an alternating least squares procedure which is convergent, very quick, and relatively free from local minimum problems.
The procedure is evaluated via both Monte Carlo and empirical data. It is found to be robust in the face of measurement error, capable of recovering the true underlying configuration in the Monte Carlo situation, and capable of obtaining structures equivalent to those obtained by other less general procedures in the empirical situation.
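The alternating least squares strategy described in the two abstracts above can be sketched minimally: alternate between optimally rescaling the (possibly ordinal) data to match the current model, and updating the model to fit the rescaled data. The sketch below, with invented function names, uses isotonic regression for the optimal-scaling step and a plain gradient update for the configuration step; it illustrates the general idea, not the published algorithms:

```python
import numpy as np

def pava(y, order):
    """Least-squares nondecreasing fit of y taken in the given order
    (pool-adjacent-violators algorithm)."""
    z = y[order].astype(float)
    vals, wts = [], []                       # merged blocks: (mean, size)
    for v in z:
        vals.append(float(v)); wts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:   # pool violators
            w = wts[-2] + wts[-1]
            merged = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / w
            vals[-2:] = [merged]
            wts[-2:] = [w]
    fit = np.repeat(vals, wts)
    out = np.empty(len(y))
    out[order] = fit                         # map back to original positions
    return out

def als_mds(delta, n_dim=2, n_iter=300, step=0.2, seed=0):
    """Nonmetric MDS by alternating (i) optimal scaling of the data and
    (ii) least-squares configuration updates. Illustrative only."""
    rng = np.random.default_rng(seed)
    n = delta.shape[0]
    iu = np.triu_indices(n, k=1)
    order = np.argsort(delta[iu])            # rank order of the dissimilarities
    X = rng.standard_normal((n, n_dim))
    for _ in range(n_iter):
        diff = X[:, None, :] - X[None, :, :]
        dist = np.sqrt((diff ** 2).sum(-1))
        # Step 1 (optimal scaling): disparities = monotone transform of the
        # data, i.e. isotonic fit of current distances along the data's ranks.
        dhat = np.zeros_like(dist)
        dhat[iu] = pava(dist[iu], order)
        dhat = dhat + dhat.T
        # Step 2 (least squares): gradient step on sum of (dist - dhat)^2.
        ratio = np.where(dist > 0, (dist - dhat) / dist, 0.0)
        X -= step * (ratio[..., None] * diff).sum(axis=1) / n
    return X
```

Given a square dissimilarity matrix D, `X = als_mds(D)` returns a configuration whose interpoint distances are approximately monotone with D.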
This chapter is written for conversation analysts and is methodological. It discusses, in a step-by-step fashion, how to code practices of action (e.g., particles, gaze orientation) and/or social actions (e.g., inviting, information seeking) for purposes of their statistical association in ways that respect conversation-analytic (CA) principles (e.g., the prioritization of social action, the importance of sequential position, order at all points, the relevance of codes to participants). As such, this chapter focuses on coding as part of engaging in basic CA and advancing its findings, for example as a tool of both discovery and proof (e.g., regarding action formation and sequential implicature). While not its main focus, this chapter should also be useful to analysts seeking to associate interactional variables with demographic, social-psychological, and/or institutional-outcome variables. The chapter’s advice is grounded in case studies of published CA research utilizing coding and statistics (e.g., those of Gail Jefferson, Charles Goodwin, and the present author). These case studies are elaborated by discussions of cautions when creating code categories, inter-rater reliability, the maintenance of a codebook, and the validity of statistical association itself. Both misperceptions and limitations of coding are addressed.
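Since the chapter discusses inter-rater reliability, a minimal sketch of one widely used chance-corrected agreement statistic, Cohen's kappa, may help (illustrative only; the chapter does not prescribe a particular coefficient, and the function below is my own):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders over the same items."""
    assert len(codes_a) == len(codes_b) and codes_a, "need paired codes"
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)  # undefined if expected == 1

# e.g. two coders labelling the same ten turns by social action:
a = ['invite', 'inform', 'invite', 'other', 'inform',
     'invite', 'inform', 'other', 'invite', 'inform']
b = ['invite', 'inform', 'inform', 'other', 'inform',
     'invite', 'inform', 'other', 'invite', 'invite']
print(f"kappa = {cohens_kappa(a, b):.2f}")   # 0.69 for this toy example
```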
Focusing on the physics of the catastrophe process and addressed directly to advanced students, this innovative textbook quantifies dozens of perils, both natural and man-made, and covers the latest developments in catastrophe modelling. Combining basic statistics, applied physics, natural and environmental sciences, civil engineering, and psychology, the text remains at an introductory level, focusing on fundamental concepts for a comprehensive understanding of catastrophe phenomenology and risk quantification. A broad spectrum of perils is covered, including geophysical, hydrological, meteorological, climatological, biological, extraterrestrial, technological and socio-economic, as well as events caused by domino effects and global warming. Following industry standards, the text provides the necessary tools to develop a CAT model from hazard to loss assessment. Online resources include a CAT risk model starter-kit and a CAT risk modelling 'sandbox' with a Python Jupyter tutorial. Every process, described by equations, (pseudo)code and illustrations, is fully reproducible, allowing students to solidify knowledge through practice.
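As a flavour of what a hazard-to-loss chain computes, here is a deliberately tiny event-set sketch (the rates, losses, and the Poisson occurrence assumption are invented for illustration and are not taken from the book or its starter-kit):

```python
import numpy as np

# Illustrative stochastic event set: (annual occurrence rate, loss in M$)
events = [
    (0.20,    10.0),   # frequent, minor
    (0.05,   120.0),   # occasional, damaging
    (0.005, 2500.0),   # rare, extreme
]

def average_annual_loss(events):
    """AAL: expected loss per year over the event set."""
    return sum(rate * loss for rate, loss in events)

def occurrence_exceedance(threshold, events):
    """Annual probability of at least one event with loss >= threshold,
    assuming independent Poisson occurrences."""
    total_rate = sum(rate for rate, loss in events if loss >= threshold)
    return 1.0 - np.exp(-total_rate)

print(f"AAL = {average_annual_loss(events):.1f} M$")
for x in (10, 100, 1000):
    print(f"P(loss >= {x:>4} M$ in a year) = "
          f"{occurrence_exceedance(x, events):.4f}")
```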
Rice herbicide drift poses a significant challenge in California, where rice fields are near almond, pistachio, and walnut orchards. This research was conducted as part of a stewardship program for a newly registered rice herbicide and specifically aimed to compare the onset of foliar symptoms resulting from simulated florpyrauxifen-benzyl drift with residues in almond, pistachio, and walnut leaves at several time points after exposure. Treatments were applied to one side of the canopy of 1- and 2-yr-old trees at 1/100X and 1/33X of the florpyrauxifen-benzyl rice field use rate of 29.4 g ai ha⁻¹ in 2020 and 2021. Symptoms were observed 3 d after treatment (DAT) for pistachio and 7 DAT for almond and walnut, with peak severity at approximately 14 DAT. While almond and walnut symptoms gradually dissipated throughout the growing season, pistachio still had symptoms at leaf-out the following spring. Leaf samples were randomly collected from each tree for residue analysis at 7, 14, and 28 DAT. At 7 DAT with the 1/33X rate, almond, pistachio, and walnut leaves had florpyrauxifen-benzyl at 6.06, 5.95, and 13.12 ng g⁻¹ fresh weight (FW) leaf, respectively. By 28 DAT, all samples from all crops treated with the 1/33X drift rate had florpyrauxifen-benzyl at less than 0.25 ng g⁻¹ FW leaf. At the 1/100X rate, pistachio, almond, and walnut residues were 1.78, 2.31, and 3.58 ng g⁻¹ FW leaf at 7 DAT, respectively. At 28 DAT with the 1/100X rate, pistachio and almond samples had florpyrauxifen-benzyl at 0.1 and 0.04 ng g⁻¹ FW leaf, respectively, but walnut leaves did not have detectable residues. Together, these data suggest that residue analysis from leaf samples collected after severe symptoms may substantially underestimate actual exposure due to the relatively rapid dissipation of florpyrauxifen-benzyl in nut tree foliage.
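To put a rough number on 'relatively rapid dissipation', assume first-order decay (an assumption made here purely for illustration; the study reports residues, not kinetics). For walnut at the 1/33X rate, residues fall from 13.12 ng g⁻¹ at 7 DAT to below 0.25 ng g⁻¹ at 28 DAT, implying

```latex
k \;\ge\; \frac{\ln(13.12/0.25)}{28 - 7} \;\approx\; 0.19\ \text{d}^{-1},
\qquad
t_{1/2} \;=\; \frac{\ln 2}{k} \;\lesssim\; 3.7\ \text{d}
```

i.e. a foliar half-life on the order of days, consistent with the caution that late sampling underestimates exposure.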
A procedure based on loss of weight after selective dissolution analysis (SDA) and washing with (NH₄)₂CO₃ was developed for estimating the noncrystalline material content of soils derived from widely different parent materials. After extraction with 0.2 N ammonium oxalate or boiling 0.5 N NaOH solutions, samples were washed with 1 N (NH₄)₂CO₃ to remove excess dissolution agents and to prevent sample dispersion. The amount of noncrystalline material removed from the sample by the extracting solution was estimated by weighing the leached products dried to constant weight at 110°C. The results closely match those obtained by chemical analyses of the dissolution product and assignment of the appropriate water. The proposed weight-loss method is less time-consuming than the chemical method, and no assumptions need be made concerning sample homogeneity or the water content of the noncrystalline material.
Extractions of whole soil and dispersed clay fractions indicated that noncrystalline material determinations on the clay fractions underestimated the noncrystalline material content of whole soils by 0 to 34%. Acid ammonium oxalate was found to be a much more selective extractant for noncrystalline materials than NaOH.
Quantification can be a double-edged sword. Converting lived experience into quantitative data can be reductive, coldly condensing complex thoughts, feelings, and actions into numbers. But it can also be a powerful tool to abstract from isolated instances into patterns and groups, providing empirical evidence of systemic injustice and grounds for collectivity. Queer lives and literatures have contended with both these qualities of quantification. Statistics have been used to pathologize queer desire as deviant from the norm, but they have also made it clear how prevalent queer people are, enabling collective action. Likewise for queer literature, which has sometimes regarded quantification as its antithesis, and other times as a prime representational resource. Across the history of queer American literature this dialectical tension between quantification as reductive and resource has played out in various ways, in conjunction with the histories of science, sexuality, and literary style. This chapter covers the history of queer quantification in literature from the singular sexological case study through the gay minority to contemporary queerness trying to transcend the countable.
We investigate whether ordinary quantification over objects is an extensional phenomenon, or rather creates non-extensional contexts; each claim having been propounded by prominent philosophers. It turns out that the question only makes sense relative to a background theory of syntax and semantics (here called a grammar) that goes well beyond the inductive definition of formulas and the recursive definition of satisfaction. Two schemas for building quantificational grammars are developed, one that invariably constructs extensional grammars (in which quantification, in particular, thus behaves extensionally) and another that only generates non-extensional grammars (and in which quantification is responsible for the failure of extensionality). We then ask whether there are reasons to favor one of these grammar schemas over the other, and examine an argument according to which the proper formalization of deictic utterances requires adoption of non-extensional grammars.
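For reference, the substitutional criterion at stake can be stated generically (this is the standard formulation, not the article's own grammar schemas): a sentential context C(·) is extensional iff co-extensional expressions can be swapped within it without change of semantic value:

```latex
\llbracket A \rrbracket = \llbracket B \rrbracket
\;\Longrightarrow\;
\llbracket C(A) \rrbracket = \llbracket C(B) \rrbracket
```

The dispute is then whether prefixing a quantifier, as in C(A) = ∀x A, yields a context with this property; the article's point is that the answer depends on the background grammar.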