This chapter begins to explore the impact of slave majorities and of limited white migration and settlement in the tropics. It starts with Barbados in the middle of the seventeenth century, showing that the island had held a substantial white majority and was the most densely settled place in England’s overseas empire before a mix of disease and emigration, combined with dwindling immigration, led to a sharp decline in the white population. The chapter details the increasing black-to-white ratios at tropical sites across the colonies after the dispersal of white settlers from Barbados. The English tried to mitigate their fears of these emerging racial imbalances by turning to new modes of political arithmetic to socially engineer populations and recruit more European migrants. English colonial architects began to calculate exactly how many white settlers would be necessary to ensure the survival of the English in the tropics and to counter the new crisis in political economy. These constructed metrics helped to entrench ideas about racial distinctions.
This chapter is written for conversation analysts and is methodological. It discusses, in a step-by-step fashion, how to code practices of action (e.g., particles, gaze orientation) and/or social actions (e.g., inviting, information seeking) for purposes of their statistical association in ways that respect conversation-analytic (CA) principles (e.g., the prioritization of social action, the importance of sequential position, order at all points, the relevance of codes to participants). As such, this chapter focuses on coding as part of engaging in basic CA and advancing its findings, for example as a tool of both discovery and proof (e.g., regarding action formation and sequential implicature). While not its main focus, this chapter should also be useful to analysts seeking to associate interactional variables with demographic, social-psychological, and/or institutional-outcome variables. The chapter’s advice is grounded in case studies of published CA research utilizing coding and statistics (e.g., those of Gail Jefferson, Charles Goodwin, and the present author). These case studies are elaborated by discussions of cautions when creating code categories, inter-rater reliability, the maintenance of a codebook, and the validity of statistical association itself. Both misperceptions and limitations of coding are addressed.
Focusing on the physics of the catastrophe process and addressed directly to advanced students, this innovative textbook quantifies dozens of perils, both natural and man-made, and covers the latest developments in catastrophe modelling. Combining basic statistics, applied physics, natural and environmental sciences, civil engineering, and psychology, the text remains at an introductory level, focusing on fundamental concepts for a comprehensive understanding of catastrophe phenomenology and risk quantification. A broad spectrum of perils is covered, including geophysical, hydrological, meteorological, climatological, biological, extraterrestrial, technological, and socio-economic, as well as events caused by domino effects and global warming. Following industry standards, the text provides the necessary tools to develop a CAT model from hazard to loss assessment. Online resources include a CAT risk model starter-kit and a CAT risk modelling 'sandbox' with Python Jupyter tutorial. Every process, described by equations, (pseudo)codes and illustrations, is fully reproducible, allowing students to solidify knowledge through practice.
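To make the hazard-to-loss chain mentioned above concrete, here is a minimal Python sketch of a toy CAT model. It is not taken from the book's starter-kit; the distribution parameters, the logistic vulnerability curve, and the exposure value are all hypothetical placeholders chosen only to illustrate the structure (hazard sampling, vulnerability, exposure, loss metrics).

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Hazard: simulate annual peak hazard intensities (hypothetical Gumbel model) ---
n_years = 10_000
loc, scale = 50.0, 12.0                      # illustrative distribution parameters
intensity = rng.gumbel(loc, scale, n_years)  # e.g. peak gust speed per year (m/s)

# --- Vulnerability: map intensity to a mean damage ratio via a logistic curve ---
def damage_ratio(i, i50=75.0, k=0.15):
    """Fraction of exposed value lost at hazard intensity i (illustrative curve)."""
    return 1.0 / (1.0 + np.exp(-k * (i - i50)))

# --- Exposure and loss ---
exposed_value = 1e9                          # total exposed value (currency units)
annual_loss = damage_ratio(intensity) * exposed_value

# --- Risk metrics: average annual loss and a tail quantile ---
aal = annual_loss.mean()
loss_200yr = np.quantile(annual_loss, 1 - 1 / 200)   # ~200-year loss level
print(f"AAL: {aal:,.0f}  |  200-yr loss: {loss_200yr:,.0f}")
```

A full CAT model would replace each of these placeholder components with peril-specific hazard footprints, engineering-based vulnerability functions, and a portfolio of exposures, but the overall pipeline has this shape.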
Rice herbicide drift poses a significant challenge in California, where rice fields are near almond, pistachio, and walnut orchards. This research was conducted as part of a stewardship program for a newly registered rice herbicide and specifically aimed to compare the onset of foliar symptoms resulting from simulated florpyrauxifen-benzyl drift with residues in almond, pistachio, and walnut leaves at several time points after exposure. Treatments were applied to one side of the canopy of 1- and 2-yr-old trees at 1/100X and 1/33X of the florpyrauxifen-benzyl rice field use rate of 29.4 g ai ha⁻¹ in 2020 and 2021. Symptoms were observed 3 d after treatment (DAT) for pistachio and 7 DAT for almond and walnut, with peak severity at approximately 14 DAT. While almond and walnut symptoms gradually dissipated throughout the growing season, pistachio still had symptoms at leaf-out in the following spring. Leaf samples were randomly collected from each tree for residue analysis at 7, 14, and 28 DAT. At 7 DAT with the 1/33X rate, almond, pistachio, and walnut leaves had florpyrauxifen-benzyl at 6.06, 5.95, and 13.12 ng g⁻¹ fresh weight (FW) leaf, respectively. By 28 DAT, all samples from all crops treated with the 1/33X drift rate had florpyrauxifen-benzyl at less than 0.25 ng g⁻¹ FW leaf. At the 1/100X rate, pistachio, almond, and walnut residues were 1.78, 2.31, and 3.58 ng g⁻¹ FW leaf at 7 DAT, respectively. At 28 DAT with the 1/100X rate, pistachio and almond samples had florpyrauxifen-benzyl at 0.1 and 0.04 ng g⁻¹ FW leaf, respectively, but walnut leaves did not have detectable residues. Together, these data suggest that residue analysis from leaf samples collected after severe symptoms may substantially underestimate actual exposure due to the relatively rapid dissipation of florpyrauxifen-benzyl in nut tree foliage.
A procedure based on loss of weight after selective dissolution analysis (SDA) and washing with (NH₄)₂CO₃ was developed for estimating the noncrystalline material content of soils derived from widely different parent materials. After extraction with 0.2 N ammonium oxalate or boiling 0.5 N NaOH solutions, samples were washed with 1 N (NH₄)₂CO₃ to remove excess dissolution agents and to prevent sample dispersion. The amount of noncrystalline material removed from the sample by the extracting solution was estimated by weighing the leached products dried to constant weight at 110°C. The results matched closely with those obtained by chemical analyses of the dissolution product and assignment of the appropriate water. The proposed weight-loss method is less time-consuming than the chemical method, and no assumptions need be made concerning sample homogeneity or water content of the noncrystalline material.
Extractions of whole soil and dispersed clay fractions indicated that noncrystalline material determinations on the clay fractions underestimated the noncrystalline material content of whole soils by 0 to 34%. Acid ammonium oxalate was found to be a much more selective extractant for noncrystalline materials than NaOH.
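The weight-loss calculation itself is simple arithmetic; the short sketch below works through it with hypothetical masses (the specific numbers are illustrative, not from the study). The noncrystalline content is taken as the mass removed by the extractant, expressed as a percentage of the initial oven-dry sample mass.

```python
# Minimal sketch of the weight-loss calculation (hypothetical masses in grams).
initial_dry_mass = 1.000   # sample mass before extraction, dried at 110 °C
residue_dry_mass = 0.815   # mass after oxalate (or NaOH) extraction,
                           # (NH4)2CO3 wash, and drying to constant weight at 110 °C

noncrystalline_pct = 100.0 * (initial_dry_mass - residue_dry_mass) / initial_dry_mass
print(f"Noncrystalline material: {noncrystalline_pct:.1f} % of dry soil")
```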
Quantification can be a double-edged sword. Converting lived experience into quantitative data can be reductive, coldly condensing complex thoughts, feelings, and actions into numbers. But it can also be a powerful tool to abstract from isolated instances into patterns and groups, providing empirical evidence of systemic injustice and grounds for collectivity. Queer lives and literatures have contended with both these qualities of quantification. Statistics have been used to pathologize queer desire as deviant from the norm, but they have also made it clear how prevalent queer people are, enabling collective action. Likewise for queer literature, which has sometimes regarded quantification as its antithesis, and other times as a prime representational resource. Across the history of queer American literature this dialectical tension between quantification as reductive and resource has played out in various ways, in conjunction with the histories of science, sexuality, and literary style. This chapter covers the history of queer quantification in literature from the singular sexological case study through the gay minority to contemporary queerness trying to transcend the countable.
We investigate whether ordinary quantification over objects is an extensional phenomenon, or rather creates non-extensional contexts; each claim having been propounded by prominent philosophers. It turns out that the question only makes sense relative to a background theory of syntax and semantics (here called a grammar) that goes well beyond the inductive definition of formulas and the recursive definition of satisfaction. Two schemas for building quantificational grammars are developed, one that invariably constructs extensional grammars (in which quantification, in particular, thus behaves extensionally) and another that only generates non-extensional grammars (and in which quantification is responsible for the failure of extensionality). We then ask whether there are reasons to favor one of these grammar schemas over the other, and examine an argument according to which the proper formalization of deictic utterances requires adoption of non-extensional grammars.
The quantification of the relative mineralogical composition of clay mixtures by powder X-ray diffraction or chemical mass balance methods has been severely hampered by a lack of representative standards. The recent development of elemental mass balance models that do not require standards for all minerals in the mixture may help circumvent this problem. These methods, which are based on the numerical optimization of systems of non-linear equations using the Marquardt algorithm, show promise for mineral quantification. The objective of this study is to make a preliminary assessment of the accuracy of these methods and to compare them to linear models that require standards for all mineral phases. Methods 1 and 2 are based on weighted average solutions to simultaneous linear equations solved for single samples with known standards. Solutions were achieved by a matrix decomposition algorithm and the Marquardt algorithm, respectively. Methods 3 and 4 are based on a set of simultaneous non-linear equations with reduced non-linearity solved by least squares optimization based on the Marquardt algorithm for multiple samples. Illite and halloysite compositions were fixed in Method 3, whereas only the halloysite composition was fixed in Method 4. All models yielded relative weight fractions of the three mineral components; additionally, Methods 3 and 4 yielded compositions of smectite, and of smectite and illite, respectively. Ten clay mixtures with varying proportions of the <0.2 μm size fraction of three different reference clays (Wyoming bentonite, Fithian illite, and New Bedford halloysite) were prepared gravimetrically and analyzed by inductively coupled plasma-atomic emission spectroscopy. Accuracy of the four methods was evaluated by comparing the known mineralogical compositions of the mixtures with those predicted by the models. Relative errors of 5 and 10% (randomly ±) were imposed on the elemental composition of the smectite standard to simulate errors due to lack of good standards. Not surprisingly, the accuracy of Methods 1 and 2 decreased rapidly with increasing error. Because Methods 3 and 4 optimized for the smectite composition and only used it for an initial guess, they were unaffected by the level of introduced error. They accurately quantified the mineralogical compositions of the mixtures and the elemental compositions of smectite, and of smectite and illite, respectively.
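To illustrate the kind of mass balance being solved, here is a minimal sketch of the single-sample linear case (in the spirit of Methods 1 and 2, not the multi-sample non-linear Methods 3 and 4). The element-by-mineral composition matrix and bulk analysis are hypothetical values; the solver is SciPy's Levenberg-Marquardt least-squares routine, with a soft closure term forcing the weight fractions to sum to one.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical element-by-mineral composition matrix A (wt.% oxide in each pure
# mineral standard) and a measured bulk composition b (wt.% in the mixture).
# Columns: smectite, illite, halloysite; rows: SiO2, Al2O3, K2O, MgO (illustrative).
A = np.array([
    [62.0, 52.0, 45.0],   # SiO2
    [20.0, 26.0, 37.0],   # Al2O3
    [ 0.3,  7.5,  0.1],   # K2O
    [ 2.8,  2.0,  0.1],   # MgO
])
b = np.array([54.9, 26.7, 2.4, 1.6])   # hypothetical bulk analysis

def residuals(w):
    # Mass-balance misfit plus a soft closure term (weight fractions sum to 1).
    return np.concatenate([A @ w - b, [10.0 * (w.sum() - 1.0)]])

fit = least_squares(residuals, x0=np.full(3, 1 / 3), method="lm")  # Marquardt-type solver
w = fit.x
print("Estimated weight fractions (smectite, illite, halloysite):",
      np.round(w / w.sum(), 3))
```

The non-linear Methods 3 and 4 extend this idea by treating some of the standard compositions themselves as unknowns and fitting several mixtures simultaneously, which is why they tolerate errors in the smectite standard.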
The rapid phase quantification method using X-ray diffraction (XRD) with a position-sensitive detector (PSD), outlined by Cressey and Schofield (1996), has been extended to facilitate mineral phase quantification of clay-bearing samples. In addition, correction factors for differences in matrix absorption effects have been calculated and applied. The method now enables mudrock mineralogy to be quantified rapidly and efficiently. Using this approach overcomes many of the problems hitherto associated with the quantitative analysis of clay minerals, in particular the effects of preferred orientation of crystallites and variable sample-area irradiation, that make the task of quantification extremely difficult by conventional Bragg-Brentano scanning diffractometry.
The model theory for quantified relevant logic developed by Robert Goldblatt and Edwin Mares is adapted to the present semantical framework. A universally quantified statement ‘For all x A(x)’ is taken to mean that there is some proposition in the present theory that entails every instance of A(x). An axiomatization of the logic is given, and completeness is proven in the Appendix to the book. Identity, the nature of domains, and higher-order quantification are also discussed.
Allophane is a very fine-grained clay mineral which is especially common in Andosols. Its importance in soils derives from its large reactive surface area. Owing to its short-range order, allophane cannot be quantified by powder X-ray diffraction (XRD) directly. It is commonly dissolved from the soil by applying extraction methods. In the present study the standard extraction method (oxalate) was judged to be unsuitable for the quantification of allophane in a soil/clay deposit from Ecuador, probably because of the large allophane content (>60 wt.%). This standard extraction method systematically underestimated the allophane content, although the weakness was less pronounced in samples with small allophane contents. In the case of allophane-rich materials, the Rietveld XRD technique, using an internal standard to determine the sum of X-ray amorphous phases, is recommended if appropriate structural models are available for the other phases present in the sample. The allophane (+imogolite) content is then obtained by subtracting the amount of other oxalate-soluble phases (e.g. ferrihydrite) from this X-ray amorphous total. No correction would be required if oxalate-soluble Fe were incorporated in the allophane structure. The present study, however, provides no evidence for this hypothesis. Mössbauer and scanning electron microscopy investigations indicate that goethite and poorly ordered hematite are the dominant Fe minerals and occur as very fine grains (or coatings) dispersed in the cloud-like allophane aggregates.
Allophane is known to adsorb appreciable amounts of water, depending on ambient conditions. The mass fraction of the sample attributed to this mineral thus changes accordingly; the choice of a reference hydration state is, therefore, a fundamental factor in the quantification of allophane in a sample. Results from the present study revealed that (1) drying at 105°C produced a suitable reference state, and (2) water adsorption has no effect on quantification by XRD analysis.
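For readers unfamiliar with the internal-standard route to X-ray amorphous content, the following worked example sketches the usual correction. The weighed and refined corundum fractions, and the ferrihydrite value subtracted at the end, are hypothetical numbers; the final subtraction reflects one reading of the procedure described above, in which other oxalate-soluble amorphous phases are removed from the amorphous total to isolate allophane (+imogolite).

```python
# Minimal sketch of the internal-standard correction (hypothetical numbers).
w_std = 0.20    # weighed fraction of corundum spike in the prepared mixture
r_std = 0.45    # corundum fraction apparent from the Rietveld refinement
                # (normalized over crystalline phases only)

# X-ray amorphous fraction of the original (unspiked) sample:
# amorphous fraction of the spiked mixture is 1 - w_std / r_std, rescaled to the
# unspiked sample mass (1 - w_std).
x_amorphous = (1.0 - w_std / r_std) / (1.0 - w_std)

# Allophane (+ imogolite) estimated by removing other amorphous, oxalate-soluble
# phases (e.g. ferrihydrite) quantified independently (hypothetical value):
ferrihydrite = 0.03
allophane = x_amorphous - ferrihydrite
print(f"Amorphous: {x_amorphous:.1%}  |  Allophane (+imogolite): {allophane:.1%}")
```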
Thirty-six bentonite samples from 16 different locations were examined in order to demonstrate the applicability of a new Rietveld description approach for quantitative phase analysis. X-ray diffraction patterns of the bulk material were obtained and analyzed by the Rietveld method. The samples contain up to ten different minerals, with dioctahedral smectite as the major component. A model for turbostratic disorder of smectites was formulated inside a structure-description file of the Rietveld program BGMN. The quality of the refinements was checked using an internal standard mineral (10.0 or 20.0 wt.% corundum) and by cross-checking results with X-ray fluorescence (XRF) data. The corundum content was reproduced with only small deviations from the nominal values. A comparison of the chemical composition obtained by XRF and the composition as re-calculated from quantitative Rietveld results shows a satisfactory agreement, although X-ray amorphous components such as volcanic glasses were not considered. As a result of this study, the Rietveld method combined with the new structure model for turbostratic disorder has proven to be a suitable method for routine quantitative analysis of bentonites with smectites as the dominant clay minerals.
Reflectance spectroscopy is a rapid and non-destructive method that can be used to detect organic compounds in geologic samples over a wide range of spatial scales that includes outcrops, hand samples, drill cores, and planetary surfaces. In order to assess the viability of this technique for quantification of organics, and aliphatic compounds in particular, the present study examines how clay mineralogy, water content, and albedo influence the strength of organic absorptions in near-infrared (NIR) reflectance spectra. The effects of clay structure and water content are evaluated using kaolinite, smectite (montmorillonite), and a mixed-layer illite-smectite as starting materials. Absorption strengths for C–H absorptions are compared to known total organic carbon (TOC) values using both reflectance spectra and single scattering albedo (SSA) spectra derived from a Hapke radiative transfer model. A linear relationship was observed between band depth and TOC for each sample suite, but strong albedo variation led to non-unique trends when band depths were calculated from reflectance spectra. These effects were minimized by conversion to SSA, for which band depth-TOC trends were similar for all mixture suites regardless of albedo or hydration level, indicating that this approach may be more broadly applicable for clay- and organic-bearing samples. Extrapolation of band depth-TOC trends for the synthetic mixtures suggested a very conservative lower limit of detection of <1 wt.% TOC, but preliminary results for natural organic-bearing shales indicated that detection limits may be an order of magnitude lower.
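As an illustration of the band depth-TOC calibration idea, the sketch below computes a continuum-removed band depth (1 minus the ratio of the reflectance at the band center to a linear continuum across the feature) and fits a linear trend against known TOC values. All wavelengths, reflectances, and TOC values are hypothetical; the conversion of reflectance to single scattering albedo via a Hapke model, which the study found important, is not shown.

```python
import numpy as np

# Hypothetical reflectance values around an aliphatic C-H feature near 2.3 µm.
wl_left, wl_center, wl_right = 2.20, 2.31, 2.40   # wavelengths, µm
r_left, r_band, r_right = 0.52, 0.44, 0.50        # reflectance at those wavelengths

# Linear continuum interpolated to the band center, then band depth.
t = (wl_center - wl_left) / (wl_right - wl_left)
r_continuum = r_left + t * (r_right - r_left)
band_depth = 1.0 - r_band / r_continuum
print(f"Band depth: {band_depth:.3f}")

# Calibrate band depth against known TOC for a suite of mixtures (hypothetical data),
# then invert the linear trend to estimate TOC of an unknown sample.
band_depths = np.array([0.01, 0.04, 0.08, 0.12, 0.16])
toc_wt_pct  = np.array([0.5, 2.0, 4.1, 6.2, 8.0])
slope, intercept = np.polyfit(band_depths, toc_wt_pct, 1)
print(f"Estimated TOC for the unknown: {slope * band_depth + intercept:.1f} wt.%")
```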
Chapter 5 is concerned with Articles 22.6 DSU and 4.10 SCM Agreement – namely, those provisions that allow one Member to bring retaliatory measures against another Member where that other Member fails to bring its measure into compliance with the covered agreement(s). The chapter argues that, although not reflected in current jurisprudence, there should be a need to demonstrate the causal link between a Member’s failure to bring its measure into conformity with a DSB ruling and the level of nullification or impairment incurred (that is, a causal link analysis). The chapter also highlights parts of the jurisprudence that would seem to support a non-attribution analysis being used in this context too. The chapter then puts forward the Non-attribution/Causal Link Analysis as one method for performing these analyses.
This paper presents a language, Alda, that supports all of logic rules, sets, functions, updates, and objects as seamlessly integrated built-ins. The key idea is to support predicates in rules as set-valued variables that can be used and updated in any scope, and support queries using rules as either explicit or implicit automatic calls to an inference function. We have defined a formal semantics of the language, implemented a prototype compiler that builds on an object-oriented language that supports concurrent and distributed programming and on an efficient logic rule system, and successfully used the language and implementation on benchmarks and problems from a wide variety of application domains. We describe the compilation method and results of experimental evaluation.
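The following is not Alda code (the abstract gives no syntax), but a plain-Python illustration of the central idea it describes: a predicate held as an ordinary set-valued variable that can be updated in any scope, with queries answered by an explicit call to an inference function that runs the rules to a fixpoint. Here the rules define transitive closure of an edge relation.

```python
def infer_path(edge):
    """Derive path(x, y) from: path(x,y) <- edge(x,y); path(x,y) <- edge(x,z), path(z,y)."""
    path = set(edge)
    changed = True
    while changed:
        # Apply the recursive rule until no new facts are derived (fixpoint).
        new = {(x, y) for (x, z) in edge for (z2, y) in path if z == z2}
        changed = not new <= path
        path |= new
    return path

edge = {("a", "b"), ("b", "c")}   # the predicate as an ordinary set-valued variable
edge.add(("c", "d"))              # updated imperatively, like any other variable
print(infer_path(edge))           # query: an explicit call to the inference function
```

In Alda, as described above, this integration is built in: the rule set, the updates to the underlying sets, and the (explicit or implicit) inference calls live in one language rather than being hand-coded as here.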
I introduce the question of what digital technology is – why it is so powerful, why it is different in kind from other media – by focusing on the convergence of two themes. First: Regarded as a tool, digital technology does not really do anything; it does not straightforwardly act on the material world. Unlike steam engines or wheels, it performs no particular physical task. Its scope of action is unspecified and unrestricted. Second: It is a social technology. Its appeal does not reside only in its instrumental means and uses, but in its power to connect us into a single global nervous system. This is in part because information technology is a technology of intentional responses. By responding to us in terms of information and by making our responses themselves the subject of measurement, digital devices engage us personally and in kind. These features thus represent the twin promises of total effectiveness and total responsiveness; that is, digital technology is both a new tool for measurement and a new medium of being in touch with others. It is this synthesis of measurement with medium – of quantitative analysis with social reality – that is most distinctive and transformative about it.
Chapter 2 is dedicated to a discussion of the conditions which enabled the emergence of research evaluation systems. Beyond the general phenomena that characterize modern society, such as rationalization, capitalism, or bureaucratization, there are also other elements that should be included as constitutive conditions for evaluative power. The chapter goes on to provide a systematic account of economization and metricization. It argues that these are the two main forces that shape contemporary academia, having enabled the emergence of research evaluation systems. Economization is defined as promoting the idea that science’s economic inputs and products should be utilized for bolstering the economy, while metricization reduces every aspect of science and social life to metrics. The varied impacts of these two forces are described by means of an expansive account of the social processes, values, logics, and technologies of power that undergird today’s research evaluation systems. This chapter therefore lays the foundations for one of the book’s key claims. This is that metrics should be understood as merely a symptom rather than the cause of the difficulties confronting academia today.
Chapter 6 deals with the main areas in which the evaluation game transforms scholarly communication practices. Thus, it focuses on the obsession with metrics as a quantification of every aspect of academic labor; so-called questionable academia, that is, the massive expansion of questionable publishers, journals, and conferences; the following of metrics deployed by institutions; and changes in publication patterns in terms of publication types, the local or global orientation of research, its contents, and the dominant languages of publication. Finally, the chapter underlines the importance of taking a geopolitically sensitive approach to evaluation games that is able to account for differences in the ways in which the game is played in central versus peripheral countries, as well as in the ways in which such practices are valorized, depending on the location of a given science system. Such differences are not only the result of differential access to resources and shifting power relations but also, as argued in the book, of the historical heritage of capitalist or socialist models in specific countries and institutions.
In Stage 5, the journey moves to meaning relations within sentences, introducing such topics as quantification (including generalized quantifiers), the representation of events and states, temporal, aspectual, and modal distinctions in semantics, and propositional attitude reports.