A universal basic income is widely endorsed as a critical feature of effective governance, and its popularity is rising in an era of substantial collective wealth alongside growing inequality. But how could it work? Current economic policies necessarily influence wealth distributions, but they are often complicated enough to hide their inefficiencies. Simplifications based on network science can offer plausible solutions, and even ways to base a universal basic income on merit. Here we examine a case study: a universal basic income for researchers. This case matters because numerous funding agencies currently rely on proposal processes with high administrative costs, which burden the proposal writers, their evaluators, and the progress of science itself. Moreover, the outcomes are known to be biased and inefficiently managed. Network science can help us redesign funding allocations in a less costly and potentially more equitable way.
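To make this concrete, here is a minimal sketch of one network-based allocation scheme of the kind gestured at here: every researcher receives the same base grant but must pass a fixed fraction of whatever they receive on to peers they endorse, so funds concentrate along the endorsement network instead of through proposal review. The grant size, donation fraction, and endorsement weights below are illustrative assumptions, not the chapter's actual model.

```python
import numpy as np

# Illustrative network-based allocation: everyone gets the same base grant,
# then repeatedly passes a fixed fraction of what they hold to peers they
# endorse. All numbers and weights here are made up for demonstration.
rng = np.random.default_rng(0)
n_researchers, base_grant, donate_frac = 5, 100_000.0, 0.5

# W[i, j]: share of researcher i's donation that goes to researcher j.
W = rng.random((n_researchers, n_researchers))
np.fill_diagonal(W, 0.0)             # no self-endorsement
W /= W.sum(axis=1, keepdims=True)    # each row sums to one

funds = np.full(n_researchers, base_grant)
for _ in range(100):                 # iterate toward a stable allocation
    donated = donate_frac * funds
    funds = (funds - donated) + W.T @ donated

print(np.round(funds))               # merit-weighted allocation, same total
```

Because each row of W sums to one, the total budget is conserved at every step; only its distribution across researchers changes, tracking peer endorsements rather than proposals.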
As its name indicates, algorithmic regulation relies on the automation of regulatory processes through algorithms. Examining the impact of algorithmic regulation on the rule of law hence first requires an understanding of how algorithms work. In this chapter, I therefore start by focusing on the technical aspects of algorithmic systems (Section 2.1), and complement this discussion with an overview of their societal impact, emphasising their societal embeddedness and the consequences thereof (Section 2.2). Next, I examine how and why public authorities rely on algorithmic systems to inform and adopt administrative acts, with special attention to the historical adoption of such systems and their impact on the role of discretion (Section 2.3). Finally, I draw some conclusions for subsequent chapters (Section 2.4).
This Element engages with the epistemic significance of disagreement, focusing on its skeptical implications. It examines various types of disagreement-motivated skepticism in ancient philosophy, ethics, philosophy of religion, and general epistemology. In each case, it favors suspension of judgment as the seemingly appropriate response to the realization of disagreement. One main line of argument pursued in the Element is that, since in real-life disputes we have limited or inaccurate information about both our own epistemic standing and the epistemic standing of our dissenters, personal information and self-trust can rarely function as symmetry breakers in favor of our own views.
A core normative assumption of welfare economics is that people ought to maximise utility and, as a corollary, that they should be consistent in their choices. Behavioural economists have observed that people demonstrate systematic choice inconsistencies, but rather than relaxing the normative assumption of utility maximisation they tend to attribute these behaviours to individual error. I argue in this article that this, in itself, is an error – an ‘error error’. In reality, a planner cannot hope to understand the multifarious desires that drive a person’s choices. Consequently, she is not able to discern which choice in an inconsistent set is erroneous. Moreover, those who are inconsistent may view neither of their choices as erroneous if the context interacts meaningfully with their valuation of outcomes. Others are similarly opposed to planners paternalistically intervening in the market mechanism to correct for behavioural inconsistencies, and advocate the free market as the best means by which people can settle on mutually agreeable exchanges. However, I maintain that policymakers also have a legitimate role in enhancing people’s agentic capabilities. The most important way to achieve this is to invest in aspects of human capital and to create institutions that are broadly considered foundational to a person’s agency. There is also a role for so-called boosts to help correct basic characterisation errors. I further contend that government regulations against self-interested acts of behaviourally informed manipulation by one party over another are legitimate, to protect the manipulated party from undesired inconsistency in their choices.
The identified victim effect is the phenomenon in which people tend to contribute more to identified than to unidentified victims. Kogut and Ritov (Journal of Behavioral Decision Making, 18(3), 157–167, 2005) found that the identified victim effect was limited to a single victim and driven by empathic emotions. In a pre-registered experiment with an online U.S. American MTurk sample on CloudResearch (N = 2003), we conducted a close replication and extension of Experiment 2 from Kogut and Ritov (Journal of Behavioral Decision Making, 18(3), 157–167, 2005). The replication findings failed to provide empirical support for the identified single victim effect hypothesis, since we found no evidence of differences in willingness to contribute when comparing a single identified victim to a single unidentified victim (ηp² = .00, 90% CI [0.00, 0.00]), and no indication of the target article’s interaction between singularity and identifiability (original: ηp² = .062, 90% CI [0.01, 0.15]; replication: ηp² = .00, 90% CI [0.00, 0.00]). Extending the replication to conduct a conceptual replication of Kogut and Ritov (Organizational Behavior and Human Decision Processes, 104(2), 150–157, 2007), we investigated a boundary condition of the effect—group belonging. We found support for an ingroup bias in helping behaviors and indications that empathic emotions and perceived responsibility contribute to this effect. We discuss differences between our study and the target article and implications for the literature on the identified victim effect.
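For readers less familiar with the effect size reported above, partial eta squared is the ratio of an effect's sum of squares to that sum plus the residual sum of squares. A minimal sketch of its computation for a two-by-two design like the target article's follows; the data frame is invented for illustration, not the replication's data.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Invented 2x2 data: willingness to contribute by singularity (single vs.
# group victim) and identifiability (identified vs. unidentified).
df = pd.DataFrame({
    "contrib": [5, 7, 6, 4, 8, 6, 5, 7, 3, 6, 4, 5],
    "singular": ["yes"] * 6 + ["no"] * 6,
    "identified": (["yes"] * 3 + ["no"] * 3) * 2,
})

model = ols("contrib ~ C(singular) * C(identified)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)

# Partial eta squared: SS_effect / (SS_effect + SS_residual).
ss_resid = table.loc["Residual", "sum_sq"]
effects = table.drop(index="Residual").copy()
effects["eta_sq_partial"] = effects["sum_sq"] / (effects["sum_sq"] + ss_resid)
print(effects[["sum_sq", "eta_sq_partial"]])
```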
At the basis of many important research questions is causality – does X causally impact Y? For behavioural and psychiatric traits, answering such questions can be particularly challenging, as they are highly complex and multifactorial. ‘Triangulation’ refers to prospectively choosing, conducting and integrating several methods to investigate a specific causal question. If different methods, with different sources of bias, all indicate a causal effect, the finding is much less likely to be spurious. While triangulation can be a powerful approach, its interpretation differs across (sub)fields and there are no formal guidelines. Here, we aim to provide clarity and guidance around the process of triangulation for behavioural and psychiatric epidemiology, so that results of existing triangulation studies can be better interpreted, and new triangulation studies better designed.
Methods
We first introduce the concept of triangulation and how it is applied in epidemiological investigations of behavioural and psychiatric traits. Next, we put forth a systematic step-by-step guide that can be used to design a triangulation study (accompanied by a worked example). Finally, we provide important general recommendations for future studies.
Results
While the literature contains varying interpretations, triangulation generally refers to an investigation that assesses the robustness of a potential causal finding by explicitly combining different approaches. This may include multiple types of statistical methods, the same method applied in multiple samples, or multiple different measurements of the variable(s) of interest. In behavioural and psychiatric epidemiology, triangulation commonly includes prospective cohort studies, natural experiments and/or genetically informative designs (including the increasingly popular method of Mendelian randomization). The guide that we propose aids the planning and interpretation of triangulation studies by prompting crucial considerations. Broadly, its steps are as follows: determine your causal question, draw a directed acyclic graph, identify available resources and samples, identify suitable methodological approaches, further specify the causal question for each method, explicate the effects of potential biases, and pre-specify expected results. We illustrate the guide’s use by considering the question: ‘Does maternal tobacco smoking during pregnancy cause offspring depression?’.
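As an illustration of the ‘draw a directed acyclic graph’ step, a minimal sketch for the worked example might look like the following. The edges, and the choice of maternal depression and socioeconomic status as confounders, are assumptions made for illustration, not the review's actual DAG.

```python
import networkx as nx

# Hypothetical DAG for: does maternal smoking in pregnancy cause
# offspring depression? Edges are illustrative assumptions.
dag = nx.DiGraph([
    ("maternal_smoking", "offspring_depression"),    # effect under study
    ("maternal_depression", "maternal_smoking"),     # assumed confounder
    ("maternal_depression", "offspring_depression"),
    ("socioeconomic_status", "maternal_smoking"),    # assumed confounder
    ("socioeconomic_status", "offspring_depression"),
])
assert nx.is_directed_acyclic_graph(dag)

# Crude confounder check (not the full back-door criterion): parents of the
# exposure that still reach the outcome once the exposure is removed.
exposure, outcome = "maternal_smoking", "offspring_depression"
reduced = dag.copy()
reduced.remove_node(exposure)
confounders = [p for p in dag.predecessors(exposure)
               if nx.has_path(reduced, p, outcome)]
print(confounders)  # ['maternal_depression', 'socioeconomic_status']
```

Making such a graph explicit is what lets each triangulated method state which biases it is, and is not, vulnerable to.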
Conclusions
In the current era of big data, and with increasing (public) availability of large-scale datasets, triangulation will become increasingly relevant in identifying robust risk factors for adverse mental health outcomes. Our hope is that this review and guide will provide clarity and direction, as well as stimulate more researchers to apply triangulation to causal questions around behavioural and psychiatric traits.
Dictionaries are an ancient and ubiquitous genre, flourishing wherever and whenever humans flourish, but it’s important to remember that dictionaries aren’t products of human biology or necessity; they are products of human creativity and community: dictionaries are cultural and therefore political. This chapter explores what it means to understand that simple fact. Dictionaries are partisan systems of ordering words and meanings. They may aim to be universal, but they inevitably emerge from, record, and respond to social moments from particular perspectives. Those perspectives may seek to celebrate or denigrate certain cultural groups, legitimate or suppress certain languages, facilitate social mobility or discrimination. Dictionaries may highlight their cultural positionality as such for political or commercial profit, or they may cast their subjective styles as objective and universal for the same political or commercial profit. In all events, dictionaries end up documenting cultural information in their definitions, usage labels and notes, illustrative examples and quotations, inserts and appendices, and beyond. And, again in all events, dictionaries can have cultural impacts entirely unintended or unanticipated by their makers, running from the positive and life-affirming to the dehumanizing and antisocial.
Whatever their private religious convictions, nearly all contemporary psychologists of religion – when they act in professional roles – agree to operate in accordance with scientific rules. Recognition of the imperfections of individual methodologies has led to an emphasis on testing theories and verifying “facts” in multiple studies. Most of this chapter explores the pros and cons associated with various research methods, including experimentation, observation, and survey research. Although the logic of experimentation is undeniable and psychologists in various subfields frequently deem it the method of choice, many questions that we most want to answer in the psychology of religion cannot be addressed through experiments that are feasible, ethical, and convincing. Thus, the psychology of religion has always relied heavily on quantitative and qualitative survey research studies. Good surveys must strive to avoid biases rooted in question wording, question order, mode of data collection, social desirability, attitude-behavior discrepancies, and the tendency to overreport religious behavior. Fortunately, many existing measures of religious attitudes and behaviors have good psychometric qualities.
Emotion recognition in conversation (ERC) faces two major challenges: biased predictions and poor calibration. Classifiers often disproportionately favor certain emotion categories, such as neutral, due to the structural complexity of classifiers, the subjective nature of emotions, and imbalances in training datasets. This bias results in poorly calibrated predictions where the model’s predicted probabilities do not align with the true likelihood of outcomes. To tackle these problems, we introduce the application of conformal prediction (CP) into ERC tasks. CP is a distribution-free method that generates set-valued predictions to ensure marginal coverage in classification, thus improving the calibration of models. However, inherent biases in emotion recognition models prevent baseline CP from achieving a uniform conditional coverage across all classes. We propose a novel CP variant, class spectrum conformation, which significantly reduces coverage bias in CP methods. The methodologies introduced in this study enhance the reliability of prediction calibration and mitigate bias in complex natural language processing tasks.
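For orientation, below is a minimal sketch of the baseline split conformal prediction procedure that the paper builds on, not the authors' class spectrum conformation variant. The classifier probabilities and class counts are invented stand-ins for a real ERC model's outputs.

```python
import numpy as np

# Minimal sketch of split conformal prediction for classification.
def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Calibrate a score threshold on a held-out calibration set."""
    n = len(cal_labels)
    # Nonconformity score: one minus the predicted probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile for ~(1 - alpha) marginal coverage.
    level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, level, method="higher")

def prediction_set(test_probs, q):
    # Keep every class whose nonconformity score is within the threshold.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]

# Toy usage with made-up probabilities over three emotion classes.
rng = np.random.default_rng(1)
cal_probs = rng.dirichlet(np.ones(3), size=200)   # stand-in classifier output
cal_labels = rng.integers(0, 3, size=200)
q = conformal_threshold(cal_probs, cal_labels, alpha=0.1)
print(prediction_set(rng.dirichlet(np.ones(3), size=3), q))
```

On average, the resulting sets contain the true class at least 90% of the time (marginal coverage); the paper's contribution addresses the fact that this guarantee need not hold uniformly for each emotion class.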
How do residents evaluate zoning relief applications for new houses of worship? Do they decide based on the facility’s expected level of nuisance, the religion of the house of worship, or the attitudes of neighbors and local officials? Using a conjoint survey experiment, this paper shows that religion is the most important predictor of resistance. People are more likely to resist new mosques than Christian churches, irrespective of other facility properties. Furthermore, this paper highlights the significant role of partisanship in residents’ evaluation of zoning relief applications: Republican respondents were more likely than Democrats to reject minority houses of worship and to support Christian churches, indicating that partisanship moderates the influence of religion. Such bias has important implications for the zoning relief application process, and local officials should evaluate residents’ opposition differently when an application concerns a minority group.
To set the scene for this volume, I begin the chapter with a narrative of my experience on the day I was promoted to professor at a Japanese university, connecting that professional experience to the ideologies of native-speakerism and trans-speakerism. I also provide the overall background of the study by stating the aims of the research, explicating the significance of the current inquiry, and outlining the core ideas of this book: native-speakerism and trans-speakerism. In other words, this chapter delineates how these two influential ideologies in language education come together in this book and makes a case for why the present inquiry is a worthwhile endeavor. The chapter concludes with a brief description of the structure and content of the volume.
This chapter describes the origin, development, and subsequent conceptualization of native-speakerism, as well as profiling relevant research on it. Importantly, it also offers my own understanding and interpretation of the notion. To do this, the chapter presents a selection of previous discussions and empirical research into native-speakerism and documents some of the most deleterious effects that native-speakerism has had on the lives and identities of both NESTs and NNESTs worldwide – specifically the effects on NNES teachers and researchers in Japan, who are the focal points of this book. I commence this chapter with a theoretical overview of the ways in which native-speakerism came to be recognized and defined. I then introduce my own analysis of what native-speakerism entails at the present moment within the ELT field. Afterward, I provide an overview of germane conceptual and empirical studies on native-speakerism, offering my own critiques of them along the way. With the key points and positionings of the discussion established, I conclude this chapter by presenting my case for the originality and significance of this endeavor and the focal research questions that the research within this volume attempts to answer.
This final chapter, divided into three major parts, draws together the findings and discussion presented in previous chapters and provides recommendations and implications for stakeholders and researchers. The first part summarizes the findings and relevant literature, followed by a conceptual framework that diagrammatically presents the findings of this study and their interrelationships. The second part offers recommendations for further research and discusses the limitations of my study. A narrative of my reflection on the journey of the study and the writing of this book concludes the volume.
The goal of this paper is to systematically review the literature on United States Department of Agriculture (USDA) forecast evaluation and to critically assess its methods and findings. Optimal forecasts are characterized by unbiasedness, accuracy, and efficiency, as well as encompassing and informativeness. This review reveals that the findings of these studies can differ greatly depending on the forecasts examined, the commodity, the sample period, and the methodology. Some forecasts performed very well, while others were not very reliable, resulting in a forecast-specific optimality record. We discuss the methodological and empirical contributions of these studies as well as their shortcomings and potential opportunities for future work.
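As one concrete instance of the tests this literature applies, unbiasedness and efficiency are commonly checked with a Mincer–Zarnowitz regression of realizations on forecasts, jointly testing an intercept of zero and a slope of one. The sketch below uses invented series, not USDA data, and is only one of the many evaluation procedures the review covers.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical series: forecasts and subsequent realizations (made-up data
# with a deliberate bias built in, so the test has something to detect).
rng = np.random.default_rng(2)
forecast = rng.normal(100, 10, size=60)
actual = 5 + 0.9 * forecast + rng.normal(0, 4, size=60)

# Mincer-Zarnowitz regression: actual = a + b * forecast + e.
# An optimal (unbiased, efficient) forecast implies a = 0 and b = 1 jointly.
X = sm.add_constant(forecast)
fit = sm.OLS(actual, X).fit()
print(fit.params)                       # estimated a and b
print(fit.f_test("const = 0, x1 = 1"))  # joint test of forecast optimality
```

Rejecting the joint hypothesis indicates the forecast is biased or inefficient; encompassing tests extend the same regression by adding a rival forecast as a second regressor.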
To evaluate how study characteristics and methodological aspects compare based on presence or absence of industry funding, Hughes et al. conducted a systematic survey of randomized controlled trials (RCTs) published in three major medical journals. The authors found industry-funded RCTs were more likely to be blinded, post results on a clinical trials registration database (ClinicalTrials.gov), and accrue high citation counts.1 Conversely, industry-funded trials had smaller sample sizes and more frequently used placebo as the comparator, used a surrogate as their primary outcome, and had positive results.
Native-speakerism is a deeply embedded prejudice that perpetuates unequal power dynamics in language education. By introducing the liberating concept of trans-speakerism, this innovative book dismantles prevalent biases and reshapes the discourse in the field. It proposes inclusive designations such as global speaker of English (GSE), global teacher of English (GTE), and global Englishes researcher (GER), and urges a shift away from labels that maintain marginalization. By systematically reviewing previous studies, it challenges native-speakerism, and seeks to advance diversity, equity, and inclusion for all language speakers, teachers and researchers – transcending the limitations imposed by speakerhood statuses. The volume features the voices of non-native English-speaking (NNES) secondary school teachers, graduate students, and university professors in Japan, highlighting the strengths, interests, and uniqueness of language practitioners and researchers – both intellectually and emotionally. It ultimately encourages all language educators, researchers, and policymakers to oppose biases, welcome linguistic diversity, and develop inclusive language education environments.
The chapter begins by probing skeptical criticism, with key contributors like Stegenga (2018) questioning our unwavering trust in contemporary medicine. Next, it delves into the criticism of overmedicalization (see Moynihan and Cassels 2005; Conrad 2007; Le Fanu 2012; Parens 2013), viewed as an inappropriate use of medical resources for sociopolitical issues. The chapter also investigates the criticism of objectification related to the quality of care, drawing from thinkers like Cassell (2004), Haque and Waytz (2012), and Topol (2019). Rounding out the chapter, utilizing insights from Popper (2000) and Haslanger (2018), it identifies these criticisms as both social and internal to the practice of medicine. It concludes that medicine is falling short of its own standards, thereby posing fundamental questions about its nature and purpose to be explored in the succeeding chapters.
The chapter revisits the criticisms and challenges presented at the book’s outset. It highlights how the book’s central theses (the Systematicity, Understanding, and Autonomy Theses) help resolve issues related to skepticism, overmedicalization, and objectification in medicine. The chapter argues that a moderate position, supported by these theses, provides a better understanding of these challenges and suggests potential solutions. The criticisms of skepticism are countered by increased systematicity in knowledge-seeking. Concerns of overmedicalization are tackled through the Autonomy Thesis, which holds that medicalization is justifiable if a condition is harmful and adequately understood by medicine. Objectification, as examined through the Autonomy Thesis, can impede medicine’s aim by undermining personal understanding. The chapter emphasizes the necessity of counteracting the potential decrease in personal understanding caused by standardization and technological advances.
The general public and scientific community alike are abuzz over the release of ChatGPT and GPT-4. Among many concerns being raised about the emergence and widespread use of tools based on large language models (LLMs) is the potential for them to propagate biases and inequities. We hope to open a conversation within the environmental data science community to encourage the circumspect and responsible use of LLMs. Here, we pose a series of questions aimed at fostering discussion and initiating a larger dialogue. To improve literacy on these tools, we provide background information on the LLMs that underpin tools like ChatGPT. We identify key areas in research and teaching in environmental data science where these tools may be applied, and discuss limitations to their use and points of concern. We also discuss ethical considerations surrounding the use of LLMs to ensure that as environmental data scientists, researchers, and instructors, we can make well-considered and informed choices about engagement with these tools. Our goal is to spark forward-looking discussion and research on how as a community we can responsibly integrate generative AI technologies into our work.