This chapter considers standardization in molecular communication. Two IEEE standards, 1906.1 and 1906.1.1, have been developed for nanonetworks in general and molecular communication in particular; the chapter describes these standards and their development.
Chapter 2 delves into the intricate interactional dynamics of administering cognitive assessments, with a focus on the Addenbrooke's Cognitive Examination-III (ACE-III). The chapter critically examines the standardization challenges faced by clinicians in specialized memory assessment services, highlighting the nuanced reasons for non-standardized practices. While cognitive assessments play a pivotal role in diagnosing cognitive impairments, the study questions the assumed standardization of the testing process. Drawing on Conversation Analysis (CA), the authors analyze 40 video-recordings of the ACE-III being administered in clinical practice to reveal variations from standardized procedures. The chapter expands on earlier findings to show how clinicians employ recipient-design strategies during the assessment. It introduces new analyses of practitioner utterances in the third turn, suggesting that deviations could be associated with practitioners' working diagnoses. The chapter contends that non-standard administration is a nuanced response to the interactional and social challenges inherent in cognitive assessments. It argues that clinicians navigate a delicate balance between adhering to standardized procedures and tailoring interactions to individual patient needs, highlighting the complex interplay between clinical demands and recipient design. Ultimately, the chapter emphasizes the importance of understanding the social nature of cognitive assessments and provides insights into the valuable reasons for non-standardized practices in clinical settings.
Many academic and media accounts of the massive spread of English across the globe since the mid-twentieth century rely on simplistic notions of globalization driven mostly by technology and economic developments. Such approaches neglect the role of states across the globe in the increased usage of English and even present individual choice as a key factor (e.g., De Swaan, 2001; Crystal, 2003; Van Parijs, 2011; Northrup, 2013). This chapter challenges these accounts by using and extending the state traditions and language regimes (STLR) framework (Cardinal & Sonntag, 2015). Presenting empirical findings that 142 countries mandate English language education as part of their national education systems, the chapter suggests important similarities with the standardization of national languages at the nation-state level, especially in the nineteenth and early twentieth centuries. This work reveals severe limitations of other political-science approaches to global English, including linguistic justice. It is shown how, in the case of global English, the convergence of diverse language regimes must be distinguished from state traditions but cannot be separated from them. Given the severe challenges to global liberal cosmopolitanism, the role of individual states' language education policies will become increasingly important.
This article presents a comprehensive evaluation of two nuclear-rated bilateral telerobotic systems, Telbot and Dexter, focusing on critical performance metrics such as effort transparency, stiffness, and backdrivability. Despite the absence of standardized evaluation methodologies for these systems, this study identifies key gaps by experimentally assessing the quantitative performance of both systems under controlled conditions. The results reveal that Telbot exhibits higher stiffness, but at the cost of greater effort transmission, whereas Dexter offers smoother backdrivability. Furthermore, positional discrepancies were observed during the tests, particularly in nonlinear positional displacements. These findings highlight the need for standardized evaluation methods, contributing to the development, manufacturing, and procurement processes of future bilateral telerobotic systems.
Plant names carry a significant amount of information without providing a lengthy description. This is an efficient shorthand for scientists and stakeholders to communicate about a plant, but only when the name is based on a common understanding. It is standard to think of each plant having just two names, a common name and a scientific name, yet both names can be a source of confusion. There are often many common names that refer to the same plant, or a single common name that refers to multiple different species, and some plants have no common name at all. Scientific names are based upon international standards; however, when the taxonomy is not agreed upon, two scientific names may be used to describe the same species. Weed scientists and practitioners can easily memorize multiple plant names and know that they refer to the same species, but when we consider global communication and far-reaching databases, it becomes very relevant to consider two sides of this shift: (1) a need for greater standardization (due to database management and risk of lost data from dropped cross-referencing); and (2) the loss of local heritage, which provides useful meaning through various common names. In addition, weed scientists can be resistant to changing names that they learned or frequently use. The developments in online databases and reclassification of plant taxonomy by phylogenetic relationships have changed the accessibility and role of the list of standardized plant names compiled by the Weed Science Society of America (WSSA). As part of an attempt to reconcile WSSA and USDA common names for weedy plants, the WSSA Standardized Plant Names Committee recently concluded an extensive review of the Composite List of Weeds common names and had small changes approved to about 10% of the list of more than 2,800 distinct species.
The question of how to balance free data flows and national policy objectives, especially data privacy and security, is key to advancing the benefits of the digital economy. After establishing that new digital technologies have further integrated physical and digital activities, and thus, more and more of our social interactions are being sensed and datafied, Chapter 6 argues that innovative regulatory approaches are needed to respond to the impact of big data analytics on existing privacy and cybersecurity regimes. At the crossroads, where multistakeholderism meets multilateralism, the roles of the public and private sectors should be reconfigured for a datafied world. Looking to the future, rapid technological developments and market changes call for further public–private convergence in data governance, allowing both public authorities and private actors to jointly reshape the norms of cross-border data flows. Under such an umbrella, the appropriate role of multilateral, state-based norm-setting in Internet governance includes the oversight of the balance between the free flow of data and other legitimate public policies, as well as engagement in the coordination of international standards.
Mass gatherings are events in which many people come together at a specific location, for a specific purpose, and within a certain period of time, such as concerts, sports events, or religious gatherings. In mass-gathering studies, many rates and ratios are used to assess the demand for medical resources. Understanding such metrics is crucial for effective planning and intervention efforts. Therefore, this systematic review aims to investigate the use of rates and ratios reported in mass-gathering studies.
Methods:
In this systematic review, the PRISMA guidelines were followed. Articles published through December 2023 were searched on Web of Science, Scopus, Cochrane, and PubMed using the specified keywords. Subsequently, articles were screened based on titles, abstracts, and full texts to determine their eligibility for inclusion in the study. Finally, the articles that were related to the study’s aim were evaluated.
Results:
Out of 745 articles screened, 55 were deemed relevant for inclusion in the study. These included 45 original research articles, three special reports, three case presentations, two brief reports, one short paper, and one field report. A total of 15 metrics were identified, which were subsequently classified into three categories: assessment of population density, assessment of in-event health services, and assessment of out-of-event health services.
Conclusion:
The findings of this study revealed notable inconsistencies in the reporting of rates and ratios in mass-gathering studies. To address these inconsistencies and to standardize the information reported in mass-gathering studies, a Metrics and Essential Ratios for Gathering Events (MERGE) table was proposed. Future research should promote consistency in terminology and adopt standardized methods for presenting rates and ratios. This would not only enhance comparability but would also contribute to a more nuanced understanding of the dynamics associated with mass gatherings.
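As an illustration only (the abstract does not list the 15 metrics it identifies, and the figures below are hypothetical), two rates commonly reported in the mass-gathering literature, the patient presentation rate (PPR) and the transport-to-hospital rate (TTHR), are typically expressed per 1,000 attendees. A minimal Python sketch:

```python
# Hypothetical illustration: two rates commonly reported in mass-gathering
# medicine, normalized per 1,000 attendees. The counts below are invented.

def rate_per_1000(events: int, attendance: int) -> float:
    """Return an event count normalized per 1,000 attendees."""
    return 1000 * events / attendance

attendance = 48_000          # hypothetical crowd size
patient_presentations = 312  # hypothetical in-event medical contacts
hospital_transports = 9      # hypothetical transfers to hospital

ppr = rate_per_1000(patient_presentations, attendance)   # patient presentation rate
tthr = rate_per_1000(hospital_transports, attendance)    # transport-to-hospital rate

print(f"PPR:  {ppr:.1f} per 1,000 attendees")   # 6.5
print(f"TTHR: {tthr:.2f} per 1,000 attendees")  # 0.19
```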
This chapter examines how ethical frameworks for education have been displaced through processes of standardization both historically and contemporarily. Before turning to current examples, the chapter begins with an analysis of twentieth-century movements in philosophy of education and curriculum to illustrate how processes of standardization and educational “narrowing” emerged as the dominant educational vision for American schooling, corresponding with the push for accountability and neoliberal reform in the last few decades of the twentieth century. How this narrowing exists in today’s K-12 and higher education environment, as well as its impact on historically marginalized groups, is then explored. The chapter then turns to how the contemporary emphasis on educational technology, datafication, and digitalization reinforces educational standardization to the detriment of ethical educational possibilities. The chapter concludes with considerations of how ethical educational visions might be revived in our current era.
Activity biosensors have recently been used to measure and diagnose the physiological status of dairy cows. However, owing to the variety of commercialized activity biosensors available in the market, activity data generated by a biosensor need to be standardized to predict the status of an animal and make relevant decisions. Hence, the objective of this study was to develop a standardization method for accommodating activity measurements from different sensors. Twelve Holstein dairy cows were monitored to collect 12,862 activity records from four types of sensors over five months. After confirming similar cyclic activity patterns from the sensors through correlation and regression analyses, the gamma distribution was employed to calculate the cumulative probability of the values of each biosensor. Then, the activity values were assigned to three levels (i.e., idle, normal and active) based on the defined proportion of each level, and the values at each level from the four sensors were compared. The results showed that the number of measurements belonging to the same level was similar, with less than a 10% difference at a specific threshold value. In addition, more than 87% of the heat alerts generated by the internal algorithm of three of the four biosensors could be assigned to the active level, suggesting that the current standardization method successfully integrated the activity measurements from different biosensors. The developed probability-based standardization method is expected to be applicable to other biosensors for livestock, which will lead to the development of models and solutions for precision livestock farming.
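A minimal Python sketch of the probability-based standardization described above, assuming raw per-sensor activity counts; the level proportions used here (60% idle, 30% normal, 10% active) are illustrative assumptions, not the study's defined values.

```python
import numpy as np
from scipy import stats

def standardize_activity(values, proportions=(0.60, 0.30, 0.10)):
    """Fit a gamma distribution to one sensor's activity values and assign each
    value to a level (0 = idle, 1 = normal, 2 = active) from its cumulative
    probability and the defined proportion of each level."""
    values = np.asarray(values, dtype=float)
    shape, loc, scale = stats.gamma.fit(values, floc=0)            # per-sensor fit
    cum_prob = stats.gamma.cdf(values, shape, loc=loc, scale=scale)
    cutoffs = np.cumsum(proportions)[:-1]                          # e.g. 0.60, 0.90
    return np.digitize(cum_prob, cutoffs)

# Usage: apply the same procedure to each biosensor, then compare how many
# measurements fall in each level across sensors.
rng = np.random.default_rng(0)
sensor_a = rng.gamma(shape=2.0, scale=30.0, size=500)              # simulated readings
print(np.bincount(standardize_activity(sensor_a)))                 # idle / normal / active counts
```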
The myth that only one kind of writing is correct is the foundation for all the myths that follow. It starts with early spelling standardization and continues with early usage guides. Its consequences include making enemies of formal and informal writing, and making people think correct writing means one thing – and means a capable and good person. Closer to the truth? Terrible writers can be good people, good writers can be terrible people, and all shared writing includes some fundamental similarities, and some differences. Formal writing fancies nouns more than verbs, for instance, and it likes informational subjects. Informal writing has more equal affection for nouns, verbs, pronouns, and adverbs, and it favors interpersonal subjects.
Molecular techniques are an alternative for the diagnosis of strongyloidiasis, caused by Strongyloides stercoralis. However, it is necessary to determine the best amplification target for the populations of this parasite present in a geographical area and to standardize a polymerase chain reaction (PCR) protocol for its detection. The objectives of this work were to compare different PCR targets for molecular detection of S. stercoralis and to standardize a PCR protocol for the target with the best diagnostic results. DNA extraction was performed from parasite larvae by saline precipitation. Three amplification targets, within the genes encoding the 18S ribosomal RNA (18S rDNA), the 5.8S ribosomal RNA (5.8S rDNA), and cytochrome oxidase 1 (COX1) of S. stercoralis, were compared, and the reaction conditions for the best target were standardized (concentration of reagents and template DNA, hybridization temperature, and number of cycles). The analytical sensitivity and specificity of the technique were determined. DNA extraction by saline precipitation made it possible to obtain DNA of high purity and integrity. The ideal target was the 5.8S rDNA, since the 18S rDNA yielded non-reproducible results and COX1 never amplified under any condition tested. The optimal conditions for the 5.8S rDNA-PCR were: 1.5 mM MgCl2, 100 μM dNTPs, 0.4 μM primers, and 0.75 U DNA polymerase, using 35 cycles and a hybridization temperature of 60 °C. The analytical sensitivity of the PCR was 1 attogram of DNA, and the specificity was 100%. Consequently, the 5.8S rDNA target was shown to be highly sensitive and specific for the detection of S. stercoralis DNA.
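For convenience, the optimized conditions reported above can be collected into a structured record, for example in a lab record-keeping or automation script; this is only a restatement of the reported values, and the field names are arbitrary.

```python
# Optimized 5.8S rDNA-PCR conditions as reported above (field names are arbitrary).
optimized_5_8s_rdna_pcr = {
    "MgCl2": "1.5 mM",
    "dNTPs": "100 uM",
    "primers": "0.4 uM",
    "DNA_polymerase": "0.75 U",
    "cycles": 35,
    "hybridization_temperature_C": 60,
}
```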
This chapter examines whether the patterns observed for political regimes and Chinese infrastructure spending are also manifested by foreign aid and exports of Chinese digital technologies that promote the adoption of Chinese standards. Analysis of several different datasets consistently shows that electoral autocracies are the major recipients. The datasets include Chinese smart cities technologies exports, foreign aid in the information and communications technology sector, and foreign deals for Huawei's cloud infrastructure and e-government services. These quantitative findings are supplemented by case studies of Malaysia (electoral autocracy) and Greece (liberal democracy).
To explain what drives the demand for Chinese infrastructure spending and the adoption of its digital standards among low- and middle-income countries, we must begin by considering how these effectively address market failures. A first set of market failures concerns impediments to private investment for building infrastructure. Western multilateral development banks such as the World Bank commonly impose liberalizing conditionalities on recipient states. These can be politically problematic for rulers of autocratic countries that rely on state controls to retain their hold on power. China, by contrast, has an explicit policy of noninterference in the domestic politics of foreign nations. China's own political motivations, coupled with huge dollar reserves, have enabled it to address the market failures of autocracies effectively and in a politically palatable way. A second type of market failure concerns transaction costs and coordination failures, which can be addressed through the adoption of digital technologies. China can leverage its preferential access to autocracies for infrastructure spending in order to promote the adoption of its digital and related technical standards.
People read and write a range of English every day, yet what counts as 'correct' English has been narrowly defined and tested for 150 years. This book is written for educators, students, employers and scholars who are seeking a more just and knowledgeable perspective on English writing. It brings together history, headlines, and research with accessible visuals and examples, to provide an engaging overview of the complex nature of written English, and to offer a new approach for our diverse and digital writing world. Each chapter addresses a particular 'myth' of 'correct' writing, such as 'students today can't write' or 'the internet is ruining academic writing', and presents the myth's context and consequences. By the end of the book, readers will know how to go from hunting errors to seeking (and finding) patterns in English writing today. This title is also available as open access on Cambridge Core.
This essay traces the relationship between colonization and academia over recent times, associating it with various intellectual moves to interrogate the hegemonic assumptions of Western culture. It argues that racial representations in literature are multifaceted and variegated, with literary studies offering opportunities to effectively demystify myths about the putative universality of American or European subject systems. This is tied to the historical specificity of particular colonial situations, with reference to the work of Nicholas Thomas, noting how one significant contribution of Australian cultural theory to literary studies has been to make debates around settler colonial paradigms more prominent. This leads into discussion of larger questions around regional autonomy, cultural appropriation and social class, with reference to the work of Walter Mignolo and Stuart Hall. It also touches upon political controversies involving the highly problematic relationship between academic and civic authorities, a continuing power struggle that can be traced back to medieval times. The essay concludes that the etymological links between university and universality offer scope to resist local standardizations of all kinds, and in this sense a decentring of racial hierarchies runs in parallel to a decentring of geographical hierarchies.
For decades, standards were perceived to be gender-neutral. However, recent research by the Standards Council of Canada has challenged that assumption. The research found that standardization was associated with a reduction in unintentional fatalities for men, but not for women. The research aligns with sector-specific research and anecdotal evidence that standards are more effective at protecting men than women. This is significant because standards form the building blocks of how products, processes, and services are designed and made to be interoperable. Therefore, standards, and the products and services that are standardized according to them, are largely designed by men, for men. This chapter explores the interconnected nature of gender, standards, and trade to argue that the lack of gender-responsiveness of standards has a negative impact on the safety and well-being of women. Furthermore, the link between standardization and trade highlights the importance of improving the gender-responsiveness of standards, given their role in the proliferation of goods, and the chapter outlines the different initiatives that are currently underway.
This chapter reviews what we know about scribal practices of orthography (focusing on spelling), how their orthographies have been studied and interpreted, and where avenues of future research lie. It covers fundamental aspects of studying scribes, showing the multidisciplinary interest in scribes and providing a broad background for thinking about scribal variation in orthography. It discusses issues such as the term and concept of a scribe, the contexts in which scribes worked, and how the role of the scribe has changed over time. The chapter focuses on research concerning scribal orthographies within three broad contexts: studies focusing on phonology and phonetics but using scribal orthography as the source of information; research that concentrates on the intersection of phonology/phonetics and orthography; and studies that are interested in orthography as an exclusively or primarily written phenomenon. It also addresses the issue of orthographic standardization specifically, as scribes have been seen as central in this process, and touches on the various frameworks and approaches adopted for the study and interpretation of spelling regularization and standardization. Finally, the chapter points to some of the avenues open for new discoveries in the future.
This chapter introduces readers to the concept of spelling standardization, offering an overview of the ways in which spelling standardization occurred, the agents behind the modern-like developments in historical spelling, and the chronology of the process of development in historical English. The chapter takes as its starting point the idea that historical spelling represents one of the most complex facets of linguistic standardization, and one where disagreements exist about its overall process of development. The contribution moves on to discuss the idea that standardization in English spelling was, for some scholars, an intralinguistic, spontaneous process of self-organization, and for others a multiparty affair that involved authors, readers, the printing press and linguistic commentators of the time. The final section summarizes findings from recent work on large-scale developments across the sixteenth and seventeenth centuries, and reviews the role and relevance of theoreticians, schoolmasters, authors and readers in Early Modern English spelling.
We don’t always have a single response variable, and disciplines like community ecology or the new “omics” bring rich datasets. Chapters 14–16 introduce the treatment of these multivariate data, with multiple variables recorded for each unit or “object.” We start with how we measure association between variables and use eigenanalysis to reduce the original variables to a smaller number of summary components or functions while retaining most of the variation. Then we look at the broad range of measures of dissimilarity or distance between objects based on the variables. Both approaches allow examination of relationships among objects and can be used in linear modeling when response and predictor variables are identified. We also highlight the important role of transformations and standardizations when interpreting multivariate analyses.
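A minimal numpy sketch, not drawn from the chapters themselves, of the two approaches outlined above: eigenanalysis of standardized variables to obtain summary components, and a matrix of pairwise Euclidean distances between objects; the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))                 # 30 objects, 5 variables

# Standardize each variable (mean 0, unit variance) so that variables measured
# on different scales contribute comparably.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Eigenanalysis of the correlation matrix: eigenvectors define the summary
# components, eigenvalues give the variation retained by each component.
R = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]            # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
scores = Z @ eigvecs                         # object scores on the components
print("proportion of variation retained:", eigvals / eigvals.sum())

# Dissimilarity between objects: Euclidean distances on the standardized data.
diff = Z[:, None, :] - Z[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))     # 30 x 30 distance matrix
```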