A detailed exploration is presented of the integration of human–machine collaboration in governance and policy decision-making, against the backdrop of increasing reliance on artificial intelligence (AI) and automation. This exploration focuses on the transformative potential of combining human cognitive strengths with machine computational capabilities, particularly emphasizing the varying levels of automation within this collaboration and their interaction with human cognitive biases. Central to the discussion is the concept of dual-process models, namely Type I and II thinking, and how these cognitive processes are influenced by the integration of AI systems in decision-making. An examination of the implications of these biases at different levels of automation is conducted, ranging from systems offering decision support to those operating fully autonomously. Challenges and opportunities presented by human–machine collaboration in governance are reviewed, with a focus on developing strategies to mitigate cognitive biases. Ultimately, a balanced approach to human–machine collaboration in governance is advocated, leveraging the strengths of both humans and machines while consciously addressing their respective limitations. This approach is vital for the development of governance systems that are both technologically advanced and cognitively attuned, leading to more informed and responsible decision-making.
During the Cold War, logical rationality – consistency axioms, subjective expected utility maximization, Bayesian probability updating – became the bedrock of economics and other social sciences. In the 1970s, logical rationality underwent attack by the heuristics-and-biases program, which interpreted the theory as a universal norm of how individuals should make decisions, although such an interpretation is absent in von Neumann and Morgenstern’s foundational work and dismissed by Savage. Deviations in people’s judgments from the theory were thought to reveal stable cognitive biases, which were in turn thought to underlie social problems, justifying governmental paternalism. In the 1990s, the ecological rationality program entered the field, based on the work of Simon. It moves beyond the narrow bounds of logical rationality and analyzes how individuals and institutions make decisions under uncertainty and intractability. This broader view has shown that many supposed cognitive biases are marks of intelligence rather than irrationality, and that heuristics are indispensable guides in a world of uncertainty. The passionate debate between the three research programs became known as the rationality wars. I provide a brief account from the ‘frontline’ and show how the parties understood in strikingly different ways what the war entailed.
Recent reviews and meta-analyses of metacognitive therapy for schizophrenia-spectrum disorder (SSD) have included uncontrolled studies, single-session interventions, and/or analyses limited to a single form of metacognitive therapy. We sought to evaluate the efficacy of metacognitive therapies more broadly based on controlled trials (CT) of sustained treatments. We conducted a pre-registered meta-analysis of controlled trials that investigated the effects of meta-cognitive therapies on primary positive symptom outcomes, and secondary symptom, function and/or insight measures. Electronic databases were searched up to March 2022 using variants of the keywords, ‘metacognitive therapy’, ‘schizophrenia’, and ‘controlled trial’. Studies were identified and screened according to PRISMA guidelines. Outcomes were assessed with random effects models and sample, intervention, and study quality indices were investigated as potential moderators. Our search identified 44 unique CTs with usable data from 2423 participants. Data were extracted by four investigators with reliability >98%. Results revealed that metacognitive therapies produced significant small-to-moderate effects on delusions (g = 0.32), positive symptoms (g = 0.30) and psychosocial function (g = 0.31), and significant, small effects on cognitive bias (g = 0.25), negative symptoms (g = 0.24), clinical insight (g = 0.29), and social cognition (g = 0.27). Findings were robust in the face of sample differences in age, education, gender, antipsychotic dosage, and duration of illness. Except for social cognition and negative symptoms, effects were evident even in the most rigorous study designs. Thus, results suggest that metacognitive therapies for SSD benefit people, and these benefits transfer to function and illness insight. Future research should modify existing treatments to increase the magnitude of treatment benefits.
The Cognitive Bias (CogBIAS) hypothesis proposes that cognitive biases develop as a function of environmental influences (which determine the valence of biases) and the genetic susceptibility to those influences (which determines the potency of biases). The current study employed a longitudinal, polygenic-by-environment approach to examine the CogBIAS hypothesis. To this end, measures of life experiences and polygenic scores for depression were used to assess the development of memory and interpretation biases in a three-wave sample of adolescents (12–16 years) (N = 337). Using mixed effects modeling, three patterns were revealed. First, positive life experiences (PLEs) were found to diminish negative and enhance positive forms of memory and social interpretation biases. Second, and against expectation, negative life experiences and depression polygenic scores were not associated with any cognitive outcomes, upon adjusting for psychopathology. Finally, and most importantly, the interaction between high polygenic risk and greater PLEs was associated with a stronger positive interpretation bias for social situations. These results provide the first line of polygenic evidence in support of the CogBIAS hypothesis, but also extend this hypothesis by highlighting positive genetic and nuanced environmental influences on the development of cognitive biases across adolescence.
Human minds are particularly biased when processing information in digital environments. Behavioral economics has highlighted many cognitive biases that afflict our economic decision making. We may choose people like ourselves for important jobs or we may focus on irrelevant characteristics. We may also focus on recent, available information because our brains interpret that as more relevant for the current situation, whereas, optimally, we might benefit from a deeper dive into collecting more representative or comprehensive data and analyzing it appropriately. Even the way information is presented influences whether we believe it. Designers of digital content and experiences need to be aware of and account for such biases when engaging users.
Behavioral economics began with the promise to fill the psychological blind spot in neoclassical theory, and ended up portraying intuition as the source of irrationality. The portrait goes like this: people have systematic cognitive biases causing substantial costs, biases are persistent like visual illusions and hardly educable, therefore governments need to step in and steer people with the help of “nudges.” The biases have taken on the status of truism. In contrast, I show that this view of human nature is tainted by a “bias bias,” the tendency to spot biases even if there are none. This involves failing to notice when sample parameters differ from population parameters, mistaking people’s random error for systematic error, and confusing intelligent inferences with logical errors. I use celebrated biases to explain the general problem. Getting rid of the bias bias will be a precondition for a positive role of human intuition and psychology in general.
The Conclusion revisits the general considerations introduced at the beginning of the book: what values are reflected in decisions to enforce some family law agreements, to refuse to enforce others, and to regulate the rest in various ways. The summary urges both a general presumption of enforceability and a prescription of regulatory restrictions fitted to the different transaction types.
Is it possible to exploit cognitive biases so that a non-professional taster prefers one wine to several other absolutely identical wines? To address this question, three complementary experiments were carried out. Each time, five wines were tasted blind in a tasting laboratory by 24 to 34 tasters. Converging evidence from the experiments shows that participants were not capable of identifying that some of the wines they were tasting were absolutely identical. Moreover, the results show that by providing information about the wines’ ratings, prices, or reputation, tasters’ expectations can be modified, and, as a result, their evaluations of the wines can be altered. Specifically, we show that it is possible to modify the ranking between different wines and to get tasters to prefer a wine over other absolutely identical wines. Finally, a surprising finding was that experienced tasters express stronger opinions and adapt their evaluations more strongly after being given manipulative information on the wines they taste.
People must often make inferences about, and decisions concerning, a highly complex and unpredictable world, on the basis of sparse evidence. An “ideal” normative approach to such challenges is often modeled in terms of Bayesian probabilistic inference. But for real-world problems of perception, motor control, categorization, language comprehension, or common-sense reasoning, exact probabilistic calculations are computationally intractable. Instead, we suggest that the brain solves these hard probability problems approximately, by considering one, or a few, samples from the relevant distributions. By virtue of being an approximation, the sampling approach inevitably leads to systematic biases. Thus, if we assume that the brain carries over the same sampling approach to easy probability problems, where the “ideal” solution can readily be calculated, then a brain designed for probabilistic inference should be expected to display characteristic errors. We argue that many of the “heuristics and biases” found in human judgment and decision-making research can be reinterpreted as side effects of the sampling approach to probabilistic reasoning.
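To make the sampling argument concrete, here is a minimal Python sketch (not the authors' model; the probabilities, sample counts, and function names are illustrative assumptions). It shows how a judge who estimates probabilities from only a handful of mental samples can rate a conjunction as more probable than one of its conjuncts, an error that is impossible under exact probability and that disappears as the number of samples grows.

```python
import random

def sample_estimate(p, k, rng):
    """Estimate a probability by drawing k Bernoulli samples and taking the mean."""
    return sum(rng.random() < p for _ in range(k)) / k

def conjunction_error_rate(p_a=0.6, p_b_given_a=0.5, k=3, trials=10_000, seed=0):
    """Fraction of trials in which a small-sample judge rates P(A and B) above P(A).

    Exact probability theory forbids this, since P(A and B) <= P(A). With only
    k samples per query, the noisy estimates cross on a sizeable fraction of
    trials -- a conjunction-fallacy-like error produced purely by the
    sampling approximation.
    """
    rng = random.Random(seed)
    p_ab = p_a * p_b_given_a
    errors = 0
    for _ in range(trials):
        est_a = sample_estimate(p_a, k, rng)
        est_ab = sample_estimate(p_ab, k, rng)
        if est_ab > est_a:
            errors += 1
    return errors / trials

if __name__ == "__main__":
    for k in (1, 3, 10, 100):
        print(f"k={k:>3} samples per judgment -> "
              f"conjunction-error rate = {conjunction_error_rate(k=k):.2%}")
```

Under these illustrative numbers, a single-sample judge commits the error on roughly one trial in ten, whereas with a hundred samples the error rate is essentially zero: the "bias" is a side effect of approximation, not of a faulty inference rule.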
Describes insights from behavioural economics that challenge the standard assumptions about consumer and firm behaviour. Considers the implications of these insights for economic regulation.
Joan Costa-Font, London School of Economics and Political Science, Tony Hockley, London School of Economics and Political Science, Caroline Rudisill, University of South Carolina
This chapter provides an introduction to behavioural health economics. Far from attempting to replace what we know about health economics as a discipline, behavioural health economics aims at complementing its foundations by relaxing some of its core assumptions. This implies adopting a more ‘realistic depiction’ of individual motivation, even though it makes the work more complex and pushes it beyond simple mathematical formulation. By incorporating what are otherwise anomalies of rational decision-making (defined as purposeful decision-making), health economics can go the extra mile with this extended toolkit, which we define as behavioural health economics. Our agent is constrained by the social norms of their environment and suffers from status quo bias and endowment effects that bias their decisions and evaluations. ‘Real individuals’ care about others and have social preferences with regard to other people’s well-being, and often suffer from self-control problems, where impulsivity and emotion translate into a specific form of short-sightedness otherwise known as ‘present bias’. These problems are arguably more prominent in the health domain. Market price is not the only relevant variable guiding behaviour in health and health care, where insurance is the most common form of payment, and tangible monetary incentives are often not made salient to influence behaviour.
Joan Costa-Font, London School of Economics and Political Science, Tony Hockley, London School of Economics and Political Science, Caroline Rudisill, University of South Carolina
This chapter examines several behavioural regularities that explain health behaviours and provide alternative behavioural explanations of actual preventative choices (e.g., smoking, weight loss, exercise, safe sex). The chapter discusses the roles of taxes and information, and how social incentives and designs that combine social and monetary incentives, while keeping in mind biases such as loss aversion, can help change behaviour. The chapter describes biases related to prevention failures, such as optimism, present bias and status quo bias, and includes examples of prevention failures in health-related behaviours.
The manner in which heuristics and biases influence clinical decision-making has not been fully investigated and the methods previously used have been rudimentary.
Aims:
Two studies were conducted to design and test a trial-based methodology to assess the influence of heuristics and biases; specifically, with a focus on how practitioners make decisions about suitability for therapy, treatment fidelity and treatment continuation in psychological services.
Method:
Study 1 (N=12) used a qualitative design to develop two clinical vignette-based tasks that had the aim of triggering heuristics and biases during clinical decision making. Study 2 (N=133) then used a randomized crossover experimental design and involved psychological wellbeing practitioners (PWPs) working in the Improving Access to Psychological Therapies (IAPT) programme in England. Vignettes evoked heuristics (anchoring and halo effects) and biased responses away from normative decisions. Participants completed validated measures of decision-making style. The two decision-making tasks from the vignettes yielded a clinical decision score (CDS; higher scores being more consistent with normative/unbiased decisions).
Results:
Experimental manipulations used to evoke heuristics did not significantly bias CDS. Decision-making style was not consistently associated with CDS. Clinical decisions were generally normative, although with some variability.
Conclusions:
Clinical decision-making can be ‘noisy’ (i.e. variable across practitioners and occasions), but there was little evidence that this variability was systematically influenced by anchoring and halo effects in a stepped-care context.
Imagine that you have just received a colon cancer diagnosis and need to choose between two different surgical treatments. One surgery, the "complicated surgery," has a lower mortality rate (16% vs. 20%) but, compared to the other surgery, the "uncomplicated surgery," also carries an additional 1% risk of each of four serious complications: colostomy, chronic diarrhea, wound infection, or an intermittent bowel obstruction. The complicated surgery dominates the uncomplicated surgery as long as life with complications is preferred over death.
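The dominance argument can be made explicit with a short sketch using the figures from the abstract (the utility numbers below are arbitrary placeholders; any assignment in which death is worse than life with a complication yields the same ranking):

```python
# Outcome distributions implied by the abstract's figures (treating the four
# 1% complication risks as additive and non-overlapping, as the vignette does).
uncomplicated = {"alive, no complication": 0.80, "alive with complication": 0.00, "dead": 0.20}
complicated   = {"alive, no complication": 0.80, "alive with complication": 0.04, "dead": 0.16}

def expected_value(dist, utilities):
    """Expected utility of a surgery given a utility for each outcome."""
    return sum(p * utilities[outcome] for outcome, p in dist.items())

# Placeholder utilities: anything with dead < alive-with-complication <= healthy
# favours the complicated surgery, because the two options differ only in how
# the remaining 20% of patients are split between death and complications.
utilities = {"alive, no complication": 1.0, "alive with complication": 0.7, "dead": 0.0}

print(f"complicated:   {expected_value(complicated, utilities):.3f}")   # 0.828
print(f"uncomplicated: {expected_value(uncomplicated, utilities):.3f}") # 0.800
```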
In our first survey, 51% of a sample (recruited from the cafeteria of a university medical center) selected the dominated alternative, the uncomplicated surgery, justifying this choice by saying that the death risks for the two surgeries were essentially the same and that the uncomplicated surgery avoided the risk of complications. In follow-up surveys, preference for the uncomplicated surgery remained relatively consistent (39%-51%) despite (a) presenting the risks in frequencies rather than percents, (b) grouping the 4 complications into a single category, or (c) giving the uncomplicated surgery a small chance of complications as well. Even when a pre-decision "focusing exercise" required people to state directly their preferences between life with each complication versus death, 49% still chose the uncomplicated surgery.
People’s fear of complications leads them to ignore important differences between treatments. This tendency appears remarkably resistant to debiasing approaches and likely leads patients to make healthcare decisions that are inconsistent with their own preferences.
Numerical performance information is increasingly important to political decision-making in the public sector. Some have suggested that biases in citizens’ processing of numerical information can be exploited by politicians to skew citizens’ perception of performance. I report on an experiment on how citizens evaluate numerical performance information from a public school context. The experiment is conducted with a large and diverse sample of the Danish population (N=1156). The analysis shows a strong leftmost-digit-bias in citizens’ evaluation of school grading information. Thus, very small changes in reported average grades, which happen to shift the leftmost grade digit, can lead to very large shifts in citizens’ evaluation of performance. The rightmost digit on the grade is almost fully ignored.
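As an illustration of the mechanism (a toy model, not the paper's estimator; the grade values are invented), the leftmost-digit heuristic can be sketched as truncating the reported average grade and ignoring everything after the first digit:

```python
def perceived_grade(avg_grade: float) -> int:
    """Leftmost-digit heuristic: evaluation driven by the first digit only.

    Illustrative assumption: the citizen effectively truncates the reported
    average and ignores the digits that follow.
    """
    return int(avg_grade)

# A 0.04-point change that crosses the digit boundary shifts the perceived
# score by a full point, while a ten-times larger change that stays within
# the boundary is invisible to the heuristic.
print(perceived_grade(6.98), perceived_grade(7.02))  # 6 7
print(perceived_grade(7.10), perceived_grade(7.50))  # 7 7
```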
Across sign languages, nouns can be derived from verbs through morphophonological changes in movement by (1) movement reduplication and size reduction or (2) size reduction alone. We asked whether these cross-linguistic similarities arise from cognitive biases in how humans construe objects and actions. We tested nonsigners’ sensitivity to differences in noun–verb pairs in American Sign Language (ASL) by asking MTurk workers to match images of actions and objects to videos of ASL noun–verb pairs. Experiment 1a’s match-to-sample paradigm revealed that nonsigners interpreted all signs, regardless of lexical class, as actions. The remaining experiments used a forced-matching procedure to avoid this bias. Counter our predictions, nonsigners associated reduplicated movement with actions not objects (inversing the sign language pattern) and exhibited a minimal bias to associate large movements with actions (as found in sign languages). Whether signs had pantomimic iconicity did not alter nonsigners’ judgments. We speculate that the morphophonological distinctions in noun–verb pairs observed in sign languages did not emerge as a result of cognitive biases, but rather as a result of the linguistic pressures of a growing lexicon and the use of space for verbal morphology. Such pressures may override an initial bias to map reduplicated movement to actions, but nevertheless reflect new iconic mappings shaped by linguistic and cognitive experiences.
Cognitive biases are a core feature of psychotic disorders. Moreover, people with a first episode of psychosis (FEP) have more difficulties in social cognition, in particular in theory of mind. On the other hand, deficits in processing speed and distractibility appear to be core features of attention deficit hyperactivity disorder (ADHD), and impairment in these basic processes can lead to deficits in more complex functions, which in turn could give rise to cognitive biases.
Objectives
To evaluate whether FEP with and without ADHD differ in the rate and type of cognitive biases.
Methods
Participants: 121 FEP patients treated at the Early Intervention Service of Reus and aged between 14 and 28 years. Instruments: the Diagnostic Interview for ADHD (DIVA) and the Cognitive Biases Questionnaire for Psychosis (CBQp), which measures two themes, anomalous perception (AP) and threatening events (TE), and five cognitive biases: Intentionalising (Int), Catastrophising (Cat), Dichotomous thinking (DT), Jumping to conclusions (JTC) and Emotional reasoning (ER).
Results
31 out of 121 (25.6%) met criteria for childhood ADHD. Compared with FEP-ADHD−, FEP-ADHD+ participants presented significantly higher scores on the CBQp total score (U = 2.538; p = 0.001), the AP theme (U = 2.262; p = 0.02), the TE theme (U = 2.242; p = 0.02) and the DT bias (U = 2.188; p = 0.03).
Conclusions
Our findings indicate that FEP-ADHD+ subjects presented more cognitive biases than FEP-ADHD− subjects. FEP-ADHD+ subjects could therefore represent a clinical subgroup with a worse prognosis, presenting more delusions, greater distress and worse cognitive insight.
Forensic scientists are influential players in the justice system. At least two reasons may account for the great confidence placed in forensics. On the one hand, most people (and judges) have a rather poor science education, which leads them to place disproportionate expectations on the analysis produced by forensic science labs. On the other hand, DNA profiling has also contributed decisively to the prestige of forensic science. Unfortunately, there is no reason for such strong confidence, and experience shows that forensic science errors are also possible.
This special issue looks at how cognitive bias matters to international law. We wish to shed light on the legal frames, labels, and cognitive biases that shape our understanding of international rules, the application of these rules, and outcomes of international adjudicatory processes. Adopting the behavioral approach to international law, we focus on actual behavior rather than assumed behavior of actors taking part in the international legal process. The central idea of this approach is that human cognitive capacities are limited—or bounded—by a variety of cognitive, emotional and social, or group-based biases. Our aim is to explore how these biases operate on the individual, group, and state level in various spheres of international law. This Symposium therefore looks beyond the traditional understanding of international law as applying between states, and focuses on how individuals, as actors in the international sphere, use international law language to influence other people, to create communities, and to shape identities.
This Introduction first serves to explain the type of shortcuts we make in our decision-making. This description of biases is followed by an overview of behavioral literature in international law that has thus far examined how bias operates in different aspects of international law—in relation to sources, to compliance, and individuals taking part in the international legal process. We then turn to introduce the Symposium and explain its contribution to the existing literature.
The vast majority of people diagnosed with a life-threatening illness want to survive that illness. (A few will take the position that they’ve had a good long life and that treatment to prolong that life further isn’t necessary or desirable.) People in the striving for survival group – that vast majority – naturally want to make ‘good’ (i.e., rational) decisions about treatment to maximize their chances. Some patients will, early on in the process, consider the balance between surviving (increasing quantity of remaining life) and thriving (maintaining quality of life). The problem is that, as several of the preceding chapters have demonstrated, physicians and patients struggle mightily to have timely and honest conversations about the prognosis, the harms and potential benefits of treatment options, and the burdens of life-prolonging technology when the patient reaches the terminal stage of an illness.