Despite the explosive growth of ESG, there is considerable disagreement about its fundamental purpose. Two types of policies associated with ESG metrics and mechanisms give rise to at least two opposing views of their purpose: “profit-maximizing policies” versus “normative sustainable policies.” This chapter advocates the second type of strategy, arguing that corporate leaders who embrace ESG should be open to adopting a purpose that may undermine or even intentionally sacrifice shareholder wealth. In defending this view, the chapter considers the question of who has the legal, political, and moral authority to decide on ESG purposes. The chapter argues that business leaders already retain a great deal of legal autonomy in deciding whether or not to adopt some version of an ESG purpose as part of the firm's overall purposes. The chapter then discusses the challenges posed by what the authors call the Political Liberal Problem, which seems to suggest that corporate leaders should refrain from promoting a particular view of the good on behalf of their constituents or stakeholders. The chapter contends that a normative sustainable view of ESG purpose depends crucially on the ability to defend the relatively autonomous moral judgment of business leaders in setting ESG strategy.
We examined how language affects moral judgments in a non-WEIRD population. Tanzanian participants (N = 103) evaluated utilitarian agents in moral dilemmas, presented either in their native Chagga or in foreign Swahili. Agents were rated as significantly more moral and braver when evaluated in the foreign language. Bravery predicted morality more strongly in the foreign language than in the native language. Indirect sacrifices were judged more moral than direct ones, but equally brave. These findings extend the moral foreign language effect to informally acquired languages and highlight methodological implications for cross-cultural research.
This chapter of the handbook tackles a foundational question in moral psychology: What constitutes the moral domain? To answer this question, we first have to know how our minds determine right from wrong. The authors argue that our intuitive, culturally flexible perceptions of harm drive our judgments of the moral domain. This is not the dominant view of the moral domain, but the most popular models of the past and present need not be the most accurate ones. Instead, these paradigms reflect broader shifts in our values as a society and a field of research. The Turielian moral domain of the 1970s and 1980s took inspiration from the cognitive revolution, positing harm as a universal value that fully determines how people decide right from wrong. The Haidtian paradigm of today is influenced by the rise of cross-cultural psychology, arguing that harm is just one of many intuitive, culturally activated moral values. Ultimately, neither paradigm gets it completely right, but the authors argue that we can build a better paradigm by combining the strengths of each. In this model, harm is the key driver of moral judgments, but perceptions of harm are intuitive and culturally variable.
This chapter of the handbook introduces readers to the field of moral psychology as a whole and provides them with a guide to the volume. The authors delineate the landscape of morality in terms of five phenomena extensively studied by moral psychologists: moral behavior, moral judgments, moral sanctions, moral emotions, and moral communication, all against a background of moral standards. They then provide brief overviews of research on a few topics not assigned a dedicated chapter in the book (e.g., the moral psychology of artificial intelligence, free will, and moral responsibility), noting several other topics not treated in depth (e.g., the neuroscience of morality, links between moral and economic behavior, moral learning). In the last section of the chapter, the authors summarize each of the contributed chapters in the book.
This chapter of the handbook tackles the question of how first-person moral judgments and moral behavior are conceptually linked. The authors frame their discussion in terms of a philosophical puzzle known as “Hume’s Problem.” The puzzle arises from the conjunction of three ideas: Humeanism, the idea that beliefs alone do not suffice to motivate action; internalism, the idea that moral judgments are intrinsically motivating; and cognitivism, the idea that moral judgments are beliefs. These three ideas are jointly inconsistent, so at least one of them must be false. But which one? The authors focus their attention on two possible solutions to the puzzle: the externalist solution, which denies that moral judgments are intrinsically motivating (rescinding internalism); and the noncognitivist solution, which denies that moral judgments are beliefs (rescinding cognitivism). Based on the psychological and neuropsychological evidence bearing on these proposals, however, it appears that neither of these solutions to Hume’s Problem has solid empirical support.
This chapter of the handbook compares the major moral sanctioning behaviors of blame and punishment from two perspectives: their cultural history and their underlying psychology. The author draws a dividing line between two phases of human evolution – before and after human settlement – and proposes that, before that watershed, moral sanctions were informal, nonhierarchical, and often mild, akin to today’s acts of moral blame among intimates. Soon after settlement, hierarchies emerged, in which punishment took hold as a new form of sanctioning, typically exacted by those higher up in the hierarchy, and eventually by institutions of punishment. The author reviews the empirical evidence on the cognitive and social processes underlying each of these sanctioning tools and proposes that their distinct cultural histories are reflected in the psychological properties we can observe today. Whereas blame is, on the whole, flexible, effective, and cognitively sophisticated, punishment is often more damaging and less effective, and can easily be abused – as in past and modern forms of institutional punishment.
The Cambridge Handbook of Moral Psychology is an essential guide to the study of moral cognition and behavior. Originating as a philosophical exploration of values and virtues, moral psychology has evolved into a robust empirical science intersecting psychology, philosophy, anthropology, sociology, and neuroscience. Contributors to this interdisciplinary handbook explore a diverse set of topics, including moral judgment and decision making, altruism and empathy, and blame and punishment. Tailored for graduate students and researchers across psychology, philosophy, anthropology, neuroscience, political science, and economics, it offers a comprehensive survey of the latest research in moral psychology, illuminating both foundational concepts and cutting-edge developments.
Moral judgments are shaped by socialization and cultural heritage. Understanding how moral considerations vary across the globe requires the systematic development of moral stimuli for use in different cultures and languages. Focusing on Dutch populations, we adapted and validated two recent instruments for examining moral judgments: (1) the Moral Foundations Vignettes (MFVs) and (2) the Socio-Moral Image Database (SMID). We translated all 120 MFVs from English into Dutch and selected 120 images from the SMID that primarily display moral, immoral, or neutral content. A total of 586 crowd-workers from the Netherlands provided over 38,460 individual judgments on moral and affective dimensions across the two stimulus sets. For both instruments, we find that moral judgments and the relationships between the moral foundations and political orientation are similar to those reported in the US, Australia, and Brazil. We provide the validated MFVs and SMID images, along with the associated rating data, to enable a broader study of morality.
People's judgements differ systematically when they read moral dilemmas in their native versus a foreign language. This so-called Foreign Language Effect (FLE) has been found in many language pairs when tested with artificial, sacrificial moral dilemmas (i.e., Trolley and Footbridge). In Experiment 1, we investigated whether the FLE can be replicated in Turkish (native) – English (foreign) bilinguals using the same dilemmas (N = 203). These unrealistic and decontextualized dilemmas have been criticized for their low external validity. Therefore, in Experiment 2, we (1) tested bilinguals with realistic scenarios that included the protagonist's age as a source of identity (child, adult, neutral), and (2) investigated the FLE in these scenarios (N = 467). Our results revealed that the FLE was not present in Turkish–English bilinguals, whether tested on sacrificial dilemmas or on realistic scenarios. However, the psychological distance of the scenarios, the protagonist's age, and perceived age similarity with the protagonist did affect moral judgments.
This chapter outlines a theory of moral perception, describes a structural analogy between perception and action, and indicates how perception can provide an objective basis for moral knowledge. Moral perception is shown to have a basis in the kinds of grounds that underlie the moral properties to which it responds, such as the violence of a face-slapping. With this outline of a theory of moral perception in view, the chapter describes the presentational phenomenal character of moral perception. Prominent in this presentationality is the phenomenological integration between our moral sensibility and our non-moral perception of the various kinds of natural properties that ground moral properties. Moral perception is possible without moral judgment but commonly yields it. It is also possible without moral emotion but may arise from it in some cases and evoke it in others. Many perceptually grounded judgments are justified; many also express empirical moral knowledge.
When confronted with moral dilemmas related to health, governments frequently turn to “moral experts,” such as bioethicists and moral philosophers, for guidance and advice. They commonly assume that these experts’ moral judgments are primarily a product of deliberate reasoning. The article challenges this assumption, arguing that experts’ moral judgments may instead be primarily a product of moral intuitions which, often subconsciously, respond to the social setting.
Trolley dilemmas were first developed by moral philosophers engaged in reflection on the ethics of permissible harm. But they have since become central to psychological research into morality. One reason why psychologists have paid so much attention to trolley dilemmas is that they see them as a key way of investigating the contrast between deontological and utilitarian approaches to ethics. This framing, however, departs from the original philosophical purpose of trolley dilemmas, and can lead psychological research astray. In this chapter, we question the assumption that trolley dilemmas can shed general light on the psychological bases of utilitarian decision-making. Some lay responses to trolley dilemmas that psychologists routinely classify as "utilitarian" in fact have little meaningful relation to what philosophers mean by this term. Even when what underlies lay responses to trolley dilemmas partly echoes aspects of a utilitarian approach to ethics, this doesn’t generalize to other moral domains, and tells us little about the psychological roots of other aspects of utilitarianism. Properly used, trolley dilemmas have a useful role to play in psychological research. But once we get clear about what we can, and cannot, learn from them, the current centrality of the trolley paradigm in moral psychology will seem overblown.
This article addresses frequently asked questions about “trolleyology,” scientific research using trolley dilemmas to probe the moral mind: What are trolley dilemmas? What is the Trolley Problem? Why should philosophers or scientists care about the Trolley Problem? What have we learned from scientific research using trolley dilemmas? Do trolley dilemmas help us understand utilitarianism or other moral philosophies? Do hypothetical trolley judgments predict real judgments? Does it matter if they don’t? What about the relationship between the scientific (descriptive) Trolley Problem and the philosophical (normative) Trolley Problem? Can science really tell us anything about what’s right or wrong? What’s new in Trolleyology these days? Do you have any concluding thoughts?
There has been widespread consensus amongst professional philosophers on responses to initial variants of the Trolley Problem. However, those philosophers have all been from Western, Educated, Industrialized, Rich, and Democratic societies. There is a growing literature that investigates whether judgments differ across cultures. If judgments do differ, the implications for moral philosophy depend on whether this represents genuine moral disagreement and on what its causes are. I survey the literature on cross-cultural variation in moral judgments in trolley problems. The evidence is limited and mixed. The higher acceptability of acting in Bystander compared to Footbridge is relatively consistent across cultures (small-scale societies may be an exception, but the evidence there is limited); however, the level of acceptability of acting in the individual scenarios differs across cultures, especially in Footbridge. This preliminary inspection suggests that cross-cultural differences in judgments plausibly exist. An assessment of their causes suggests that they are likely to be genuine moral disagreements, resulting from differences in culture and institutions. This raises issues for the metaphysics and epistemology of moral judgments. If we are not to be skeptics about the existence of moral facts or the possibility of knowing them, then we may need to endorse a form of constructivist relativism.
The Trolley Problem is one of the most intensively discussed and controversial puzzles in contemporary moral philosophy. Over the last half-century, it has also become something of a cultural phenomenon, having been the subject of scientific experiments, online polls, television programs, computer games, and several popular books. This volume offers newly written chapters on a range of topics including the formulation of the Trolley Problem and its standard variations; the evaluation of different forms of moral theory; the neuroscience and social psychology of moral behavior; and the application of thought experiments to moral dilemmas in real life. The chapters are written by leading experts on moral theory, applied philosophy, neuroscience, and social psychology, and include several authors who have set the terms of the ongoing debates. The volume will be valuable for students and scholars working on any aspect of the Trolley Problem and its intellectual significance.
Five studies demonstrated that people selectively use general moral principles to rationalize preferred moral conclusions. In Studies 1a and 1b, college students and community respondents were presented with variations on a traditional moral scenario that asked whether it was permissible to sacrifice one innocent man in order to save a greater number of people. Political liberals, but not relatively more conservative participants, were more likely to endorse consequentialism when the victim had a stereotypically White American name than when the victim had a stereotypically Black American name. Study 2 found evidence suggesting participants believe that the moral principles they endorse are general in nature: when presented sequentially with both versions of the scenario, liberals again showed a bias in their judgments of the initial scenario, but demonstrated consistency thereafter. Study 3 found that conservatives were more likely to endorse the unintended killing of innocent civilians when Iraqi civilians were killed than when American civilians were killed, while liberals showed no significant effect. In Study 4, participants primed with patriotism were more likely to endorse consequentialism when Iraqi civilians were killed by American forces than were participants primed with multiculturalism. However, this was not the case when American civilians were killed by Iraqi forces. Implications for the role of reason in moral judgment are discussed.
Trolley problems have been used in the development of moral theory and the psychological study of moral judgments and behavior. Most of this research has focused on people from the West, with implicit assumptions that moral intuitions should generalize and that moral psychology is universal. However, cultural differences may be associated with differences in moral judgments and behavior. We operationalized a trolley problem in the laboratory, with economic incentives and real-life consequences, and compared British and Chinese samples on moral behavior and judgment. We found that Chinese participants were less willing to sacrifice one person to save five others, and less likely to consider such an action to be right. In a second study using three scenarios, including the standard scenario where lives are threatened by an oncoming train, fewer Chinese than British participants were willing to take action and sacrifice one to save five, and this cultural difference was more pronounced when the consequences were less severe than death.
Whether preferences are sensitive to framing has been a major topic of debate in recent decades. For example, several studies have explored whether the dictator game in the give frame gives rise to a different rate of pro-sociality than the same game in the take frame, with mixed results. Here we contribute to this debate with two experiments. In Study 1 (N=567) we implement an extreme dictator game in which the dictator either gets $0.50 and the recipient gets nothing, or the opposite (i.e., the recipient gets $0.50 and the dictator gets nothing). We experimentally manipulate the words describing the available actions using six terms, ranging from very negative (e.g., stealing) to very positive (e.g., donating) connotations. We find that the rate of pro-sociality is affected by the words used to describe the available actions. In Study 2 (N=221) we ask a new sample of participants to rate each of the words used in Study 1 from “extremely wrong” to “extremely right”. We find that these moral judgments can explain the framing effect in Study 1. In sum, our studies provide evidence that framing effects in an extreme Dictator game can be generated using morally loaded language.
The prominent dual process model of moral cognition suggests that reasoners intuitively detect that harming others is wrong (deontological System-1 morality) but have to engage in demanding deliberation to realize that harm can be acceptable depending on the consequences (utilitarian System-2 morality). The nature of the interaction between the two processes, however, is not clear. To address this key issue, we tested whether deontological reasoners also intuitively grasp the utilitarian dimensions of classic moral dilemmas. In three studies, subjects solved moral dilemmas in which utilitarian and deontological considerations cued conflicting or non-conflicting decisions while performing a demanding concurrent load task. Results show that reasoners’ sensitivity to conflicting moral perspectives, as reflected in decreased decision confidence and increased experienced processing difficulty, was unaffected by cognitive load. We discuss how these findings argue for a hybrid dual process model interpretation in which System-1 cues both a deontological and a utilitarian intuition.
The Moral Foundations Vignettes (MFVs) – a recently developed set of brief scenarios depicting violations of various moral foundations – enable investigators to directly examine differences in moral judgments about different topics. In the present study, we adapt the MFV instrument for use in Portuguese. To this end, the following steps were performed: 1) Translation of the MFV instrument from English into Brazilian Portuguese; 2) Synthesis of the translated versions; 3) Evaluation of the synthesis by expert judges; 4) Evaluation of the MFV instrument by university students from Sao Paulo city; 5) Back translation; and lastly, 6) Validation study, which used a sample of 494 (385 female) university students from Sao Paulo city and a set of 68 vignettes, subdivided into seven factors. Exploratory analyses show that the relationships between the moral foundations and political ideology are similar to those found in previous studies, but the severity of moral judgments on the individualizing foundations tended to be significantly higher in the Sao Paulo sample compared to a sample from the USA. Overall, the present study provides a Portuguese version of the MFV that performs similarly to the original English version, enabling a broader examination of how the moral foundations operate.