
1 - Modern Moral Psychology

A Guide to the Terrain

Published online by Cambridge University Press: 20 February 2025

Bertram Malle, Brown University, Rhode Island
Philip Robbins, University of Missouri

Summary

This chapter of the handbook introduces readers to the field of moral psychology as a whole and provides them with a guide to the volume. The authors delineate the landscape of morality in terms of five phenomena extensively studied by moral psychologists: moral behavior, moral judgments, moral sanctions, moral emotions, and moral communication, all against a background of moral standards. They then provide brief overviews of research on a few topics not assigned a dedicated chapter in the book (e.g., the moral psychology of artificial intelligence, free will, and moral responsibility), noting several other topics not treated in depth (e.g., the neuroscience of morality, links between moral and economic behavior, moral learning). In the last section of the chapter, the authors summarize each of the contributed chapters in the book.

Publisher: Cambridge University Press
Print publication year: 2025

The term moral psychology is commonly used in at least two different senses. In the history of philosophy, moral psychology has referred to a branch of moral philosophy that addresses conceptual and theoretical questions about the psychological basis of morality, often (but not always) from a normative perspective (Tiberius, 2015). In empirical investigations across psychology, anthropology, sociology, and adjacent fields, moral psychology has examined the cognitive, social, and cultural mechanisms that serve moral judgment and decision making, including emotions, norms, and values, as well as biological and evolutionary contributions to the foundations of morality. Since 2010, over six thousand articles in academic journals have investigated the nature of morality from a descriptive-empirical perspective, and this is the perspective the handbook emphasizes. Our overarching goal in this volume, however, is to bring philosophical and psychological perspectives on moral psychology into closer contact while maintaining a commitment to empirical science as the foundation of evidence. Striving toward this goal, we have tried to cast a wide net of questions and approaches, but naturally we could not cover all topics, issues, and positions. We offer some guidance on omitted topics later in this introduction, which we hope will allow the reader to take first steps into those additional domains.

The chapters try to strike a balance between being up to date in a fast-moving field and highlighting insights that have garnered attention over an extended time. The reader may consult some of the main journals publishing on moral psychology to follow the latest research in the field. Table 1.1 lists the journals that, in a recent database search, published the largest number of articles on “moral psychology.” We see frequent contributions from journals in social and cognitive psychology but also from generalist and philosophy journals.

Table 1.1 Journals that publish the largest number of articles on moral psychology

10 general journals (in alphabetical order)

  • Cognition
  • Developmental Psychology
  • Frontiers in Psychology
  • Journal of Experimental Psychology: General
  • Journal of Experimental Social Psychology
  • Journal of Personality and Social Psychology
  • Personality and Individual Differences
  • Personality and Social Psychology Bulletin
  • PLoS ONE
  • Social Psychological and Personality Science

Next 10

  • British Journal of Social Psychology
  • Emotion
  • European Journal of Social Psychology
  • Journal of Applied Psychology
  • Judgment and Decision Making
  • Philosophical Psychology
  • Psychological Science
  • Social Cognitive and Affective Neuroscience
  • Social Psychology
  • Zeitschrift für Sozialpsychologie

Applied or Domain Focus

  • Ethics & Behavior
  • Journal of Business Ethics
  • Journal of Moral Education
  • Nursing Ethics
  • Psychological Trauma
  • Social Science & Medicine
  • Traumatology

Additionally: Theoretical Focus

  • Ethics
  • Journal of Philosophy
  • Mind and Language
  • Personality and Social Psychology Review
  • Philosophical Review
  • Psychological Review
  • Review of Philosophy and Psychology

Note. The first three groups stem from a search for the keyword moral*, conducted July 31, 2023, on all Academic Search Premier databases; they show the peer-reviewed journals that published the highest number of empirical articles between 2010 and 2023. The fourth category lists important outlets for theoretical work in the field.

As several of the chapters illustrate, much research in moral psychology has been informed in one way or another by the work of moral philosophers. For this reason, it may be helpful for the reader to bear in mind the theoretical perspectives on morality that tend to dominate the philosophical literature in ethics and metaethics. Four of these perspectives have been especially influential in moral psychology. First, there is act utilitarianism, the idea that right actions are those actions that have the best consequences, as measured by aggregate utility (Mill, 1998). Second, there is deontology, according to which the moral permissibility of an action is determined by whether it conforms to a set of abstract rules, such as the rule that people should be treated as ends in themselves, rather than solely as means to an end (Kant, 1785/1998). The contrast between act utilitarian and deontological commitments is vividly illustrated by sacrificial dilemma cases, in which prioritizing the good of the many would violate the rights of the one (Thomson, 1985). A third option, which effectively splits the difference between act utilitarianism and deontology, is rule utilitarianism, according to which an action is morally right just in case it is required by an optimific social rule, that is, a rule that would tend to maximize aggregate utility if everyone were to follow it. Finally, there is virtue ethics, which shifts the focus from how people should behave to what sort of character traits people should cultivate (namely, the virtues). On this view, moral standards for behavior are determined by what a hypothetical virtuous person or persons would do in the context: Right actions are actions that all virtuous people would do, wrong actions are actions that no virtuous person would do, and merely permissible (i.e., neither right nor wrong) actions are actions that some virtuous people would do.

We put no constraints on authors to align themselves with a particular metaethical position. Some of the chapters could be assigned to well-known positions that have influenced the field: utilitarian (Baron, in Chapter 8; Niemi & Nichols, in Chapter 7), deontological (Andrighetto & Vriens, in Chapter 4; Malle, in Chapter 15), and virtue-based (Narvaez, in Chapter 17). Other chapters do not take any particular position but speak to topics relevant to those positions (Demaree-Cotton & Kahane, in Chapter 5; Goodwin & Landy, in Chapter 2; Robbins, in Chapter 9; Shweder et al., in Chapter 20).

1.1 The Landscape of Morality

The broad topic of morality encompasses a variety of more specific phenomena, such as moral judgment, moral decision making, moral emotions, moral norms, and more. In this section we briefly discuss and distinguish these different phenomena that make up the landscape of morality (following Bello & Malle, 2023) and highlight which of the chapters speak to each of the phenomena.

Morality exists only against the background of a community’s moral standards. Moral judgment would not be moral judgment unless it is made relative to a set of moral standards, which are typically referred to as norms and values; the same holds for what makes decision making moral, what makes communication moral, and so on. Though all are embedded in norms and values, several phenomena of morality must be distinguished, and Figure 1.1 highlights some of the more important distinctions. There is diversity within each of the phenomena: Within moral communication, for instance, we would find forgiving, justifying, and praising, and numerous emotions have been considered moral emotions (e.g., guilt, outrage, contempt, and disgust). But the boundaries between the phenomena can be drawn in meaningful ways, at least to organize the sets of questions and psychological mechanisms under investigation.

Figure 1.1 The landscape of morality and its five major territories: moral behavior (including moral decision making), moral judgments (including multiple types, such as evaluation, wrongness, and blame), moral sanctions, moral emotions, and moral communication (expanded from Bello & Malle, 2023, Figure 31.1).

Source: Sun, The Cambridge Handbook of Computational Cognitive Sciences, © 2023, published by Cambridge University Press; reproduced with permission.

1.1.1 Moral Behavior

Moral behavior includes intentional acts (often studied under the label moral decision making) but also unintended or negligent behavior. The processes underlying an agent’s moral behavior are distinct from the processes underlying an observer’s moral judgments of another person’s behavior. This distinction helps sharpen terms like moral sense (Marazziti et al., 2013; Wilson, 1993), which can refer to behavioral phenomena such as altruism and moral disengagement or to evaluations and judgments of other people’s behavior (and sometimes one’s own). Moral judgments and moral behaviors take the same norms into account and are responsive to similar kinds of information (e.g., justifying reasons, causal counterfactuals), but their underlying processes are distinct, and conclusions from one do not necessarily apply to the other.

Philosophy and psychology have long focused on moral decision making as the primary driver of moral behavior. Decision making is moral if it involves choices between possible paths of action in light of moral norms. In principle, moral decisions are no different from other decisions (Zeelenberg et al., 2012). But because of the deep involvement of norms, moral decisions take on key properties of norms, including their substantial context sensitivity (Bartels et al., 2015) and keen responsiveness to what the community thinks and does (Bicchieri, 2006). A long tradition of psychological work has also examined moral decision making in light of moral values and principles (Kohlberg, 1981; Schwartz, 1992). Although such abstract guides can undoubtedly influence concrete moral decisions (and the justifications of those decisions), the question of whether a particular principle applies to a given problem is still guided by context-specific normative considerations. In fact, details in the setting and type of action under consideration can distinctly affect which moral principles dominate other principles (Christensen & Gomila, 2012).

However, moral decisions – intentional acts by their nature – are only one part of the territory of moral behavior. Many morally significant behaviors are unintentional (such as negligence, recklessness, preventable accidents, or unintended side effects), and moral communities respond strongly to such behaviors (Laurent et al., 2016; Monroe & Malle, 2019). These responses, in turn, constitute the second major territory of the moral landscape: moral judgments.

1.1.2 Moral Judgments

When people make a moral judgment, they appraise an object in light of moral norms. These appraisals differ considerably depending on the object of appraisal – an event, behavior, or person – and the information that guides the appraisal – about an action, its reasons, caused outcomes, counterfactuals, and more (Alicke, 2000; Cushman, 2008; Malle, 2021). In the philosophical literature, the term moral judgment often refers to one of these kinds: first-person appraisals that a behavior one might perform is right or wrong (e.g., Ratoff & Roskies, Chapter 3 in this volume). In this meaning of the term, moral judgment directly underlies moral (intentional) action. We follow here the broader use of the term (more common in empirical moral psychology), in which moral judgments can refer to both first-person and third-person good–bad evaluations, norm judgments (what is prescribed or prohibited), and wrongness judgments, as well as blame judgments and character judgments (Malle, 2021).

One might distinguish these kinds of moral judgments by their position in a processing hierarchy. Very often, the flow of information processing begins with the detection of a norm violation, so norms may already be cognitively activated when the other moral judgments are formed. The simplest and fastest judgments are evaluations (Yoder & Decety, 2014), followed by wrongness judgments (Cameron et al., 2017); more complex are judgments of blame and character (Malle et al., 2014; Murray et al., 2024), which build on the simpler ones. Another way to distinguish moral judgments is by the functions they serve when expressed in social settings. Norm judgments serve to persuade others to (not) take certain actions (“That’s not allowed!”), declare applicable norms (e.g., posted rules of conduct), and teach others (“The appropriate thing to do here is …”). Stating a behavior’s moral wrongness mainly serves to mark a behavior as a moral transgression, especially when it is seen as intentional. Blame, finally, criticizes, influences reputation, and regulates relationships (Coates & Tognazzini, 2012).

While blame has been investigated extensively, less research is available on praise. Praise and blame are by no means mirror images of one another, and scholars have documented numerous asymmetries between the two judgments (Bostyn & Roets, 2016; Guglielmo & Malle, 2019; Hindriks, 2008; Pizarro et al., 2003). Both take into account the agent’s mental states and the performed behavior’s relation to relevant norms. But whereas blame tries to bring an agent who violated a norm back in line with the norm, praise identifies an action that exceeds normative expectations (Monroe et al., 2018) and rewards the agent for that action, helping to build social relationships (Anderson et al., 2020).

1.1.3 Moral Sanctions

Aside from examining moral decisions and judgments, scholars have examined another class of responses to morally significant behavior: moral sanctions. Whereas moral judgments typically occur in the perceiver’s head, moral sanctions are social acts that express a moral judgment, impose a cost on the transgressor, and regulate the transgressor’s and other people’s future behavior. Most prominent among sanctions is punishment, often cast as the backward-looking act of retribution (literally payback), said to fulfill a desire to hurt the transgressor (Goodwin & Gromet, 2014). However, punishment is more complex. First, punishment can be an act of affirming a norm system (Fehr & Fischbacher, 2004) and of teaching (Cushman, 2013); if done properly, it can maintain cooperation in a community (Fehr & Gächter, 2000), particularly when it is accompanied by communication about the relevant norms (Andrighetto et al., 2013). Second, many forms of punishment have emerged rather recently in human cultural history, primarily as institutional behavior regulation closely tied to the law (Cushman, 2013; Malle, Chapter 15 in this volume). By contrast, everyday moral sanctions are rarely as harsh and physical as the institutional ones; instead, they range from complaints (Drew, 1998) and acts of moral criticism (Moisuc & Brauer, 2019) to shaming or exclusion (Panagopoulou-Koutnatzi, 2014). In further contrast to institutional punishment, these informal sanctions are often negotiable and can even be taken back, and they are subject to social scrutiny to ensure they are appropriate and fair (Friedman, 2013; Malle et al., 2022).

1.1.4 Moral Emotions

Emotions play at least two roles in the landscape of morality. First, many scholars have identified so-called moral emotions, such as guilt, shame, or contempt (Haidt, 2003; Prinz & Nichols, 2010; Tangney et al., 2007). Which emotions fall under this special label has long been debated, and Russell (Chapter 10, this volume) examines in detail the possible criteria one might apply to such designations. Second, scholars have considered the role of emotions as either causes, concomitants, or consequences of moral judgments and decisions (Monin et al., 2007). We see historical oscillations between opposing positions on this matter – between philosophers such as Kant and Hume as well as, more recently, between waves of empirical research – each side claiming moral judgments to be either primarily a matter of reason or primarily a matter of emotion. Over the past half century, early research cast moral judgment as deliberate, mature, and complex (Kohlberg, 1981). Then a revolution occurred in which morality was reframed as based primarily on evaluations, emotions, and unreasoned intuitions (Greene, 2008; Haidt, 2001; Prinz, 2006). Over the past decade, several scholars have cast doubt on previous evidence for this “unreason” picture of moral judgment (Guglielmo, 2015; Royzman et al., 2009; Sauer, 2012), provided new evidence for the significant role of cognition and reasoning (Guglielmo & Malle, 2017; Martin & Cushman, 2016; Monroe & Malle, 2019; Royzman et al., 2011), and even offered evidence for the possible temporal precedence of moral cognition over emotion (Cusimano et al., 2017; Yang et al., 2013). Models that ascribe primary causality to affect and emotion have been called out as underspecified (Huebner et al., 2009; Mikhail, 2008), and perhaps in response, information processing models have aimed to offer more theoretical detail while still allowing room for the role of emotion (Malle et al., 2014; May, 2018; Sauer, 2011). Interestingly, many researchers dedicated to the study of emotion per se (whether or not it is involved in morality) have developed models that integrate cognitive and affective processes (e.g., Scherer, 2009). A newly emerging position is that moral emotions have pronounced social functions, including signaling norm commitments and expressing moral judgments (Grappi et al., 2013; Kupfer & Giner-Sorolla, 2017; Sorial, 2016).

1.1.5 Moral Communication

This brings us to the final territory of morality: the tools and practices of communicating about moral norms, violations, and their sequelae. When a norm violation occurs, people almost automatically make moral judgments in their heads, but at least some of the time they express their moral criticism (Molho et al., 2020; Przepiorka & Berger, 2016), ask transgressors to account for their actions (Semin & Manstead, 1983), and sometimes grant forgiveness even for grave atrocities (Gobodo-Madikizela, 2002). Transgressors, for their part, will often try to explain or justify their violations (Gollan & Witte, 2008; Riordan et al., 1983) and mitigate others’ criticism with remorse, apologies, or compensation (Tedeschi & Reiss, 1981; Yucel & Vaish, 2021). Even though it is through communication that people typically regulate other community members’ moral behavior (Andrighetto et al., 2013; Shank et al., 2019), we see overall less research in moral psychology dedicated to these social-communicative processes than to the cognitive processes that undergird them. In this handbook, therefore, one chapter (Funk & McGeer, Chapter 16) directly speaks to the important communicative sphere, and several others draw connections (Guan et al., Chapter 22; Malle, Chapter 15; Shweder et al., Chapter 20).

1.2 Guide to Additional Topics

Given the vast landscape of morality, this handbook cannot be a complete map of its territory. In this section, we provide pointers to topics that did not end up in the handbook but represent exciting and valuable directions of work.

1.2.1 Moral Psychology of Artificial Agents

The first topic of interest, centering on facets of “artificial morality,” has seen a rapid rise over the past 10 years. Two recent reviews in the psychological literature took stock of some of the garnered insights (Bonnefon et al., 2024; Ladak et al., 2023), and several other reviews have surveyed some of the core questions and initial answers (Bigman et al., 2019; Malle, 2016; Misselhorn, 2018; Pereira & Lopes, 2020). The range of questions is broad: how to design machines that follow norms and make moral judgments and decisions (Cervantes et al., 2020; Malle & Scheutz, 2019; Tolmeijer et al., 2021) and how humans do and will perceive such (potential) moral machines (Malle et al., 2015; Shank & DeSanti, 2018; Stuart & Kneer, 2021); legal and ethical challenges that come with robotics (Lin et al., 2011), such as challenges posed by social robots (Boada et al., 2021; Salem et al., 2015), autonomous vehicles (Bonnefon et al., 2016; Zhang et al., 2021), autonomous weapons systems (Galliott et al., 2021), and large language models (Harrer, 2023; Yan et al., 2024); deep concerns over newly developed algorithms that perpetuate sexism, racism, or ageism; and tension over the use of robots in childcare, eldercare, and health care, which is both sorely needed and highly controversial (Sharkey & Sharkey, 2010; Sio & Wynsberghe, 2015).

Artificial agents raise a number of vexing philosophical questions, such as whether they could ever have consciousness or free will, whether those properties would be required to grant them rights and moral-legal standing, whether machines could ever be genuine moral agents (or moral patients), and many more. Work on artificial agents can also inform psychological theories of moral phenomena. For example, what features do artificial agents have to have in order for people to spontaneously impose norms on them, ascribe morally relevant mental states to them (e.g., justified reasons), and exchange moral emotions with them (e.g., forgiveness to reduce guilt)? Is evidence for deep psychological complexity necessary, or might mere humanlike appearance trigger fundamental moral responses? Finally, artificial agents provide opportunities to develop more precise theoretical and computational models of moral phenomena (e.g., norms, decisions), to test them first in simulations, and eventually in actual physical implementations. But as computationally more sophisticated designs begin to show signs of moral competence (Bello & Malle, 2023; Conte et al., 2013; Pereira & Lopes, 2020), is there something lost by reducing moral judgments, decisions, and emotions to long strings of code?

1.2.2 Morality, the Self, and Identity

The nature of the self and identity has long been a central topic in social psychology (Leary et al., 2003; Suls, 2014; Wylie, 1979) and philosophy (Metzinger, 2004; Olson, 1999; Parfit, 1992). There is also a rich literature connecting the study of morality to the self and identity. We can divide this literature into two main strands. The first strand connects morality and the self; the second strand connects morality and identity.

In social psychology, the word “self” is used to mean a variety of different things (Leary & Tangney, 2012). Two of the more common meanings are the executive self, which regulates an agent’s behavior (Baumeister & Vohs, 2003), and the evaluative self, which comprises the thoughts and feelings people have about themselves, especially in relation to others (Tesser, 1988). The executive self plays a key role in the production of moral behavior. Bandura (1999) uses the term inhibitive moral agency for the capacity to refrain from acting inhumanely toward others. This kind of behavioral self-regulation in turn depends upon the capacity for self-directed negative emotions (e.g., guilt, regret, shame), which motivate conformity to moral norms by making immoral behavior aversive to the agent (Bandura, 1999; Silver & Silver, 2021). Indeed, when self-directed negative emotions are uncoupled from immoral behavior – whether through self-serving justification of the behavior, displacement of responsibility for its consequences, or dehumanization and blaming of the victims – the results can be literally catastrophic (e.g., war, mass murder, genocide) (Bandura, 1999).

The evaluative self can also be a significant determinant of moral behavior, since effective self-regulation requires the capacity for monitoring one’s own behavior and evaluating it in relation to moral standards. Such standards are an essential part of most people’s self-concept, and people are highly motivated to think of themselves as morally upright (Steele, 1988). In fact, research has shown that a moral self-concept emerges in early childhood and is a predictor of prosocial behavior (Christner et al., 2020). Reflecting on oneself as a good person can indeed lead to more good behavior (Young et al., 2012), especially when such reflections link up to abstract values; more concrete recall of past good deeds can lead to the opposite effect, called moral licensing (Merritt et al., 2010), whereby people become more prone to immoral behavior after engaging in behavior that boosts their moral self-esteem. Thus, the executive self and the evaluative self do not always pull in the same moral direction.

Morality is also intimately tied up with identity, in two key meanings of the term “identity.” Diachronic identity encompasses the features that ground the perceived persistence of persons over time. Multiple studies provide support for the idea that continuity of moral traits is seen as essential to the persistence of persons over time, insofar as someone who changes their moral stripes (or loses them altogether) is no longer seen as the same person (Hitlin, 2011; Prinz & Nichols, 2010; Strohminger & Nichols, 2014). Further support for this idea comes from studies showing that dramatic improvement of a person’s moral character tends to result in mitigation of blame and reduction of moral responsibility for past immoral behavior (Gomez-Lavin & Prinz, 2019) – almost as if another person had performed those behaviors.

Synchronic identity encompasses the features that determine (or seem to determine) who a person is at a given time. In particular, the “true self” refers to the features that are seen as essential to a person’s synchronic identity (Strohminger et al., 2017), and they are often features related to morality (see Goodwin & Landy, Chapter 2 in this volume). For example, whether an emotionally driven action is seen as expressive of an agent’s true self depends on whether the action is morally bad or morally good (Newman et al., 2015). Further, studies suggest that moral evaluation of a person’s behavior, including blame for immoral behavior and praise for moral behavior, is sensitive to perception of whether the behavior expresses the agent’s true self (Newman et al., 2015; Robbins & Alvear, 2023).

1.2.3 Free Will and Moral Responsibility

Philosophers agree on two principles: first, moral judgments apply to an action only if the agent is morally responsible for performing it; second, agents are morally responsible only for those actions that they freely choose to perform. Beyond these points, the consensus tends to break down. For example, philosophers debate what it means for an action to be freely chosen and whether human actions ever meet that condition. One major source of disagreement is the assumption of causal determinism, shared by most philosophers – the idea that every event, including every human choice, is fully determined by the causal history of the world leading up to it. According to one view, an action is freely chosen just in case the agent could have made a different choice even if every link in the causal chain of events leading up to the actual choice had been the same. On this view, known as incompatibilism, the existence of free will – and by extension, that of moral responsibility – is incompatible with causal determinism. According to an alternative view, an action is freely chosen if the choice was free of certain types of constraints, either external (e.g., coercion) or internal (e.g., compulsion). On this second view, known as compatibilism, the existence of free will – and by extension, that of moral responsibility – is compatible with causal determinism.

But those are just the views of philosophers. What do ordinary people think about these matters? How do they make sense of the ideas of free will, moral responsibility, and causal determinism? Do ordinary people find incompatibilism more intuitive than compatibilism, or the other way around? These questions have animated empirical research with the goal of identifying the psychological origins of a philosophical problem as old as philosophy itself (Nichols, Reference Nichols2011).

Regarding the commonsense concept of free will, the preponderance of evidence suggests that ordinary people do not think of free will in the metaphysically demanding way presupposed by incompatibilism (Monroe et al., 2017; Monroe & Malle, 2010; Nahmias et al., 2005; Vonasch et al., 2018). In commonsense thinking, free choice is a matter of freedom from the kinds of local constraints that make it difficult for an agent to express their values and commitments (Woolfolk et al., 2006). This ordinary concept of free will corresponds most closely to the compatibilist one, according to which the metaphysical issue of causal determinism is irrelevant to the reality of free will (Strawson, 1962). This is not altogether surprising, given that the concept of causal determinism is sufficiently esoteric that it may be difficult for people without philosophical training to understand what incompatibilists are worried about (Sommers, 2010).

By contrast, empirical studies of ordinary people’s intuitions about moral responsibility have yielded mixed results. Alongside evidence of the intuitive appeal of compatibilism about moral responsibility, together with evidence for the idea that incompatibilist intuitions result from confusing causal determinism with fatalism (Nahmias et al., 2007; Nahmias et al., 2014), there is evidence for the opposite view, as well as support for the idea that compatibilist intuitions about moral responsibility result from affective bias (Nichols & Knobe, 2007). Making sense of the diversity of findings in this area, much of it originating in work by experimental philosophers, is an ongoing project in moral psychology. The same applies to research on the effect of free will beliefs on moral behavior, some of which suggests that disbelief in free will (in the metaphysically robust sense presupposed by incompatibilism) is associated with a greater propensity for aggression and dishonesty (Vohs & Schooler, 2008), whereas other studies find no evidence for these claims (Open Science Collaboration, 2015). Likewise, there is some empirical support for the idea that disbelief in free will makes people more punitive (Krueger et al., 2013), but efforts to replicate such results have failed (Monroe et al., 2014). In fact, a recent meta-analysis of 145 experiments showed that manipulating free will beliefs has few, if any, downstream consequences (Genschow et al., 2023).

1.2.4 Other Topics Yet

The chapters included in this handbook touch on numerous other exciting strands of moral psychology that did not receive a dedicated chapter to review their full respective literatures. For example, chapters by Decety (Chapter 11) and by FeldmanHall and Vives (Chapter 12) engage with the affective and cognitive neuroscience of morality, and chapters by Narvaez (Chapter 17) and by Baird and Matthews (Chapter 19) connect to its neurobiological underpinnings. The reader may consult additional recent work that uses insights from neuroscience to analyze long-standing philosophical issues, such as free will, consciousness, and rationalism about moral judgment (Castro-Toledo et al., 2023; May, 2023).

Likewise, methods and insights from behavioral economics appear in chapters by FeldmanHall and Vives (Chapter 12), Niemi and Nichols (Chapter 7), and Purzycki and Bendixen (Chapter 23), and the reader may want to explore additional work on the interplay between economic and moral behavior (Vila‐Henninger, 2021) and on the moral impact of exposure to market processes (Bartling & Özdemir, 2023; Enke, 2023; Fike, 2023). The connection between behavioral economics paradigms and computational and cognitive neuroscience measures is another interesting recent direction (Fornari et al., 2023; Lengersdorff et al., 2020).

Evolutionary perspectives on the origins of morality are distributed over chapters by Narvaez (Chapter 17), Shweder et al. (Chapter 20), and Malle (Chapter 15), with the latter two focusing on cultural rather than biological perspectives. Animal behavior work appears in chapters by Decety (Chapter 11) as well as FeldmanHall and Vives (Chapter 12), and the reader may benefit from integrative perspectives on phylogenetic and cultural evolution by Boehm (2018), de Waal (2014), and Tomasello (2016), as well as a provocative recent proposal that links genetic heritability patterns to domains of cooperative morality (Zakharin et al., 2024).

Additional topics with less representation but no less significance include the group dynamics of morality (Ellemers et al., 2023), moral learning (Cushman et al., 2017), trust (Bach et al., 2022; Malle & Ullman, 2021; Sztompka, 2019), and morality in organizations and collectives (Blomberg & Petersson, 2024; Dhillon & Nicolò, 2023; Sattler et al., 2023).

1.3 Overview of the Chapters

We now offer brief summaries of each handbook chapter, hoping that the reader will find many of these contributions enticing for further reading.

1.3.1 Part I: Building Blocks

Part I introduces some of the basic building blocks of moral psychology, topics of both core theoretical concern and major historical significance.

Geoff Goodwin and Justin Landy (Chapter 2) review empirical research on moral character, which has only recently attained a prominent role in psychology, in contrast to long traditions in ethics and education. A person’s moral character comprises the dispositions to think, feel, and act morally, and these dispositions are cross-situationally and temporally fairly consistent. Against a long-standing belief in psychology that the personality disposition of warmth most strongly influences people’s impressions of one another, the evidence suggests that moral character occupies this central position. Moral character exerts its influence on impressions quite independently of other personality traits, and it features prominently in people’s representations of their own personality as well. Moral character is also a central element in a person’s perceived identity – who the person is perceived to be “deep down” (cf. our discussion of the “true self” in the Morality, the Self, and Identity subsection [Section 1.2.2]). Finally, the authors close by charting some of the features from which people infer another’s moral character, including actions but also, critically, mental states such as goals and intentions.

William Ratoff and Adina Roskies (Chapter 3) tackle the question of how first-person moral judgments and moral behavior are conceptually linked. They frame their discussion in terms of a philosophical puzzle known as “Hume’s problem.” The puzzle arises from the conjunction of three ideas: Humeanism, the idea that beliefs alone do not suffice to motivate action; internalism, the idea that moral judgments are intrinsically motivating; and cognitivism, the idea that moral judgments are beliefs. These three ideas are jointly inconsistent, so at least one of them must be false. But which one? The authors focus their attention on two possible solutions to the puzzle: the externalist solution, which denies that moral judgments are intrinsically motivating (rescinding internalism), and the noncognitivist solution, which denies that moral judgments are beliefs (rescinding cognitivism). The authors review empirical research to explore whether either of the solutions is supported by evidence. On the issue of whether moral judgments are intrinsically motivating, they argue that studies of moral cognition in psychopathy and acquired sociopathy do not settle the matter, nor do studies of folk intuitions about internalism. Likewise, studies of the influence of emotion on moral judgment do not settle the dispute between cognitivism and noncognitivism, since they do not establish that emotion is constitutive of moral judgment in the way that noncognitivism requires. Thus, an empirically compelling solution to Hume’s problem remains to be found.

Giulia Andrighetto and Eva Vriens (Chapter 4) examine the foundational role of norms in moral psychology, a topic that has long garnered cross-disciplinary interest from philosophy to biology, from anthropology to computer science. The authors touch briefly on the debates over potentially different types of norm (e.g., conventional, social, moral, legal) and maintain that social and moral norms, in particular, are difficult to separate unless one adopts a specific theoretical position. The authors’ treatment centers on a core feature of most or all social and moral norms: that people, in complying or not complying with norms, are sensitive to other community members’ norm-relevant beliefs and attitudes. By recognizing this sensitivity, scientists can, first, gain a better scientific understanding of norm inference, the complex processes by which people learn which norms apply to a given setting and how strong the norms are; and second, they can better diagnose whether (and how strongly) a given norm actually exists in a community. All these insights pave the way for potential interventions on people’s beliefs about the community’s norms, which are easier to change than individual moral convictions.

Joanna Demaree-Cotton and Guy Kahane (Chapter 5) introduce a frequently discussed topic in recent moral psychology: moral dilemmas. They characterize a moral dilemma as a decision-making situation with three features: first, every available course of action has a high moral cost and therefore involves a difficult moral trade-off; second, it is morally appropriate for the agent to feel conflicted about what choice to make; and third, it is morally appropriate for the agent to feel some regret about whatever choice they made. The authors then explore different empirical accounts of why some moral trade-offs, but not others, are experienced as difficult or impossible to resolve. Among the most influential of these accounts is Greene’s (2008) dual-process theory, which traces the experience of moral dilemmas to a conflict between a value backed by intuition (“System 1”) and a value backed by reflection (“System 2”). The authors also review empirical research bearing on the psychological mechanisms underpinning a person’s resolution of moral dilemmas and the phenomenon of “moral residue” (regret or guilt over one’s resolution). They argue that further empirical work is needed to understand how people weigh competing values against one another and that such understanding requires expanding the range of moral dilemmas to include cases beyond those targeted in recent research (e.g., sacrificial dilemmas).

Samantha Abrams and Kurt Gray (Chapter 6) tackle another foundational question: What constitutes the moral domain? To answer this question, they explore three approaches to modeling moral cognition, focusing on three issues: first, what behaviors are seen as morally wrong; second, whether moral norms are universal rather than culturally variable; and third, what psychological mechanisms underlie judgments of moral wrongness. According to Turiel’s model, wrong behaviors are those seen as harmful or unfair, moral norms are universal, and wrongness judgments are largely the result of conscious reasoning from abstract principles. By contrast, in Haidt’s model, wrong behaviors are not just those seen as harmful or unfair, but also those seen as disloyal, disrespectful of authority, or impure; moral norms exhibit substantial cross-cultural variation; and wrongness judgments are typically the product of intuition, rather than conscious reasoning. The model favored by the authors combines elements from both of these approaches: from Turiel, the idea that perceptions of wrongness boil down to perceptions of harm; and from Haidt, the idea that moral norms are culturally variable and the idea that wrongness judgments are more a product of intuition than reasoning.

1.3.2 Part II: Thinking and Feeling

Part II focuses on the cognitive and affective processes that make up various moral phenomena: moral decision making, moral judgment, the categorization of agents and patients, and moral emotions.

Laura Niemi and Shaun Nichols (Chapter 7) introduce some core elements of moral decision making by taking expected utility theory as a starting point. In its classic form, expected utility theory focuses on the outcomes of actions: The expected utility of a decision is the sum of the values associated with the different possible outcomes of the decision, weighted by the probability of their occurrence. As such, expected utility theory is well suited to explain the moral choices recommended by utilitarianism, which characterizes right actions in terms of the maximization of aggregate utility. However, to account for more complex, nonutilitarian decisions, expected utility theory must be extended to assign utilities to actions themselves. This action-based form of expected utility theory can readily accommodate the fact that people tend to assign low utility to actions that violate moral norms (even when the outcomes of those actions might have positive utility, such as when lying would lead to financial gain). The authors then apply this expanded action-based expected utility theory of moral decision making to questions regarding what actions count as fair, how the decision maker’s actions take other people’s outcomes into account, and how the value of actions changes when directed at one’s own or another’s group.
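In symbols (a sketch in notation of our choosing, not necessarily the chapter’s): the classic outcome-based form computes

$$EU(a) = \sum_{o} P(o \mid a)\, U(o),$$

where $P(o \mid a)$ is the probability that action $a$ produces outcome $o$ and $U(o)$ is the value of that outcome. The action-based extension adds a utility term for the action itself,

$$EU'(a) = U(a) + \sum_{o} P(o \mid a)\, U(o),$$

so that a norm-violating action such as lying can carry a negative $U(a)$ large enough to outweigh any positive outcome utilities (e.g., financial gain).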

Jonathan Baron (Chapter 8) posits utilitarianism as a standard of rational moral judgment. He does not directly defend utilitarianism as a theory but investigates cases of apparent contradiction between people’s moral decisions (sometimes grounded in nonutilitarian principles) and the consequences of those decisions that they themselves would consider worse for themselves and everybody else. For example, when some people use a moral principle (e.g., bodily autonomy) to assertively make a decision (e.g., to not get vaccinated), it can have negative moral consequences for others (e.g., infecting people) and for themselves (risking infection). Baron asks whether such contradictions in moral reasoning can provide insights into some of the determinants of such reasoning. These insights, importantly, are valuable even for those who do not adopt utilitarianism as a normative model. From over a dozen candidate moral contradictions, Baron concludes that many deviations from utilitarian considerations in moral contexts are reflections of familiar nonmoral cognitive biases (e.g., framing effects, certain concepts of causality), but some arise from adherence to strong moral rules or principles (e.g., protected or sacred values).

Philip Robbins (Chapter 9) discusses the role of mind perception in the categorization of individuals as moral agents and moral patients. Moral agents are defined as individuals who can commit morally wrong actions and deserve to be held accountable for those actions; moral patients are defined as individuals who can be morally wronged and whose interests are worthy of moral consideration. It is generally agreed that the attribution of moral agency and moral patiency is linked to the attribution of mental capacities. Robbins surveys a variety of models of mind perception, some of which focus on the representation of mental capacities, some of which focus on the representation of mental traits. The dominant model of mind perception in moral psychology is the experience–agency model (Gray et al., 2007), which divides the space of mindedness into experiential capacities like sentience and self-awareness, and agentic capacities like deliberative reasoning and self-control. Reviewing the empirical literature on moral categorization, Robbins argues that neither the experience–agency model nor any of the major alternatives to it (i.e., the warmth–competence model, the agency–communion model, and the human nature–human uniqueness model) captures the full panoply of mental features to which everyday attributions of moral agency and moral patiency are sensitive.

Pascale Sophie Russell (Chapter 10) asks whether, and in what ways, emotions can be designated as “moral.” Several emotions have been shown to be associated with moral judgments or moral behaviors. But more than association must be shown if we are to label some emotions characteristically moral. Russell guides the reader through a voluminous literature and applies two criteria to test the moral credentials of emotions. The first criterion is whether the emotion is significantly elicited by moral stimuli (e.g., transgressions); the second is whether it has significant community-benefiting consequences. This second criterion, less often used in past analyses, tries to capture the fact that moral norms, judgments, and decisions are all intended to benefit the community, so moral emotions should, too. From this analysis, the author concludes that anger clearly meets the criteria, contempt and disgust less so. Guilt passes easily, and shame fares better than some may expect. Among the positive candidates, compassion and empathy both meet the criteria but are somewhat difficult to separate. Finally, elevation and awe have numerous prosocial consequences, but awe is rarely triggered by moral stimuli.

Jean Decety (Chapter 11) examines the complex relation between empathy and prosocial behavior and considers findings from animal behavior, neuroscience, and psychological studies. He begins by distinguishing three components of the broader phenomenon of empathy: emotional contagion, empathic concern, and perspective taking. He reviews evidence suggesting that emotional contagion of a conspecific’s pain often leads to helping behavior, but such contagion is modulated by group membership, levels of intimacy, and attitudes toward the other. Thus, contagion is not an automatic trigger for prosocial behavior. Empathic concern, too, is a powerful motivator of prosocial behaviors but is also socially modulated – extended to some people more than others and to individuals more than groups. Effortful perspective taking, finally, can provide a better understanding of other people’s minds but does not always generate prosocial behavior, even when it facilitates empathic concern. In sum, various forms of empathy can motivate prosocial behaviors, but empathy is fragile and often stops short of its potential when people engage with large groups, people outside of their tribe, or anonymous strangers.

1.3.3 Part III: Behavior

Part III focuses on some of the central classes of behaviors that scholars of morality have puzzled over: prosocial behavior, antisocial behavior, conflict, and dehumanization. It also examines the primary moral sanctioning behaviors (blame and punishment) by which humans respond to moral violations, and it ends on the topic of moral communication.

Oriel FeldmanHall and Marc-Lluís Vives (Chapter 12) highlight that, for successful social living, humans’ capacity to be prosocial had to surpass their capacity for selfish and harmful behavior. The authors provide an overview of the scientific study of prosocial capacities, with a focus on experimental research. Summarizing extensive work in laboratory paradigms of behavioral economics and social psychology, the authors document a strong human tendency toward behaving prosocially. They then briefly examine the phylogenetic and developmental origins of behaving prosocially and its different motives, such as reputational concerns and caring for others, as well as emotions that facilitate prosocial behavior, such as empathy or guilt. (See Decety’s chapter [Chapter 11] for a more comprehensive treatment of empathy.) FeldmanHall and Vives also summarize insights from cognitive neuroscience on the brain networks that undergird prosocial behavior. They close with a call for more naturalistic experimental paradigms and the consideration of the temporal dynamics of prosocial behavior.

Kean Poon and Adrian Raine (Chapter 13) provide the counterweight to FeldmanHall and Vives by inspecting the relationship between antisociality and morality from the dual perspectives of moral psychology and moral neuroscience. Their chapter provides a comprehensive overview of research on the moral cognition of different types of antisocial individuals, focusing on the interplay between cognition and emotion in psychopathic individuals. Based on their review of the research, the authors suggest that the capacity for moral reasoning in psychopathy is less defective than generally assumed. While the propensity of psychopathic individuals to engage in immoral behavior is due largely to affective deficits (e.g., low empathy), it also stems from dysfunction in the neural circuitry underlying moral decision making. This simple narrative, however, is complicated by the fact that there is no single explanation of the immoral behavior exhibited by the full range of antisocial individuals. For example, while dysfunction in the neural circuitry of moral decision making may account for the immoral behavior of individuals with primary psychopathy and individuals prone to proactive (i.e., instrumental) aggression, it is less apt for explaining similar behavior by individuals with secondary psychopathy and a propensity for reactive aggression.

Nick Haslam (Chapter 14) introduces dehumanization as another dark side of humanity. Humanness is a central concept in moral psychology, and whereas people normally treat other humans with moral consideration, they may turn to dehumanizing others as a result of moral disengagement (loosening ordinary moral inhibitions) and moral exclusion (no longer applying norms of justice, fairness, and compassion to others). Haslam reviews recent psychological accounts of dehumanization that are grounded in empirical research and highlights several common threads: Dehumanization varies from subtle to extreme (e.g., genocide), from interpersonal to intergroup, and from contexts of mere perception to contexts of severe conflict. In these theoretical accounts, dehumanizing a person or group means ascribing less of certain human attributes to the target – both attributes that distinguish humans from other animals (e.g., intellect, rationality, or civility) and attributes that distinguish humans from inanimate agents (e.g., essential capacities for emotion and warmth). Haslam’s analysis meshes with that of Abrams and Gray (Chapter 6, this volume) and the discussion by Robbins (Chapter 9, this volume) of mental capacities people normally ascribe to other people – thus, dehumanization is a form of dementalizing. Within this framework, Haslam reviews the empirical literature on what forms dehumanization takes and what its possible functions are. He also considers a number of critiques and debates over these findings that have recently surfaced.

Bertram F. Malle (Chapter 15) compares the two major moral sanctioning behaviors of blame and punishment from two perspectives: their cultural history and their underlying psychology. He draws a dividing line between two phases of human evolution – before and after human settlement – and proposes that, before that watershed, moral sanctions were informal, nonhierarchical, and often mild, akin to today’s acts of moral blame among intimates. Soon after settlement, hierarchies emerged, in which punishment took hold as a new form of sanctioning, typically exacted by those higher up in the hierarchy and eventually by institutions of punishment. Malle reviews the empirical evidence on the cognitive and social processes underlying each of these sanctioning tools and proposes that their distinct cultural histories are reflected in the psychological properties we can observe today. Whereas blame is, on the whole, flexible, effective, and cognitively sophisticated, punishment is often more damaging and less effective, and it can easily be abused – as in past and modern forms of institutional punishment. Compare this chapter to Janice Nadler’s (Chapter 21, this volume) treatment of similarities and differences between blame in ordinary life and blame within the US legal system.

Friederike Funk and Victoria McGeer (Chapter 16) close this part of the book with a discussion of moral communication, in which the topic of punishment is also center stage. Their approach to the topic is somewhat unorthodox, insofar as the term moral communication is typically used to refer to a class of behaviors distinct from moral sanctions (see earlier discussion in this chapter of the landscape of morality, depicted in Figure 1.1). The authors argue that moral norms are distinctive in that their transgression tends to provoke a desire in members of the community to punish the transgressor, and that such punishment has a communicative function. Indeed, on their view, punishment is best understood as a nonlinguistic form of moral communication, one that expresses sharp disapproval of the transgressor’s actions and attitudes. This approach to punishment has the potential to resolve conflicting results from studies of the effect of group membership on punishment, such as the fact that in-group transgressors are sometimes treated more leniently than out-group transgressors (“in-group favoritism”) and sometimes more harshly (the “black sheep effect”). The solution to the puzzle, the authors argue, is that severity of punishment depends on who the intended target of communication is and what message the punishment is intended to convey.

1.3.4 Part IV: Origins, Development, and Variation

Part IV addresses questions of variability – from the evolutionary origins of morality to its development in the earliest phases of life, all the way to cultural variability.

Darcia Narvaez (Chapter 17) discusses morality from an evolutionary-developmental, cultural, and (to a lesser extent) neurobiological perspective. The framework for her discussion is triune ethics metatheory, a main tenet of which is that healthy moral development requires the provision by the community of an “evolved nest” in which caregivers treat children with love and respect. Failure to receive this support can limit the social and emotional competence necessary for species-typical moral functioning. The natural trajectory of moral development, Narvaez suggests, tends toward an engagement-centered ethic oriented around the virtues of cooperation, compassion, and egalitarianism – the ethic characteristic of Indigenous cultures (and of our hunter-gatherer ancestors, as Malle’s chapter [Chapter 15] suggests). This path of development is readily disrupted by practices of child-rearing in Western industrialized societies, which deprive children of the social and emotional resources needed for healthy moral development, thereby promoting the development of a self-protection-centered ethic oriented around competition, cold-heartedness, and dominance. Thus, understanding the role of the evolved nest in scaffolding moral development is key to understanding why antisocial behavior is so pervasive in modern Western culture – and to designing interventions that might help to reduce it.

Kiley Hamlin and Francis Yuen (Chapter 18) present a large body of evidence suggesting that, within the first year of life, infants hold both expectations about and preferences for morally good versus bad protagonists. Across different methods, the authors show that infants distinguish between morally significant acts of helping and hindering as well as between acting fairly and unfairly; they prefer the morally good actions and the morally good protagonists; and they expect others to prefer the morally good protagonists as well. Going beyond a mere valence difference, these expectations vary systematically in response to critical factors, such as a victim’s state of need, in-group/out-group membership, and a character’s intentions. Many of the findings appear in infants 8–12 months of age, some as early as 3 months of age. Questions remain, such as how consistent the findings are across experimenters and populations; whether the violated norm is truly moral or only a social expectation; and to what extent early learning guides these expectations and preferences. But overall, the evidence for budding moral distinctions in early infancy is highly compelling and provocative.

Abigail A. Baird and Margaret M. Matthews (Chapter 19) take up the issue of moral development in adolescence, focusing on the role of individual differences in shaping the emergence of a mature moral sense. Their wide-ranging discussion touches on how differences in temperament, gender, familial and peer relationships, and lived experience influence the timing and outcome of adolescent moral development. Illustrating the role of temperament, for example, high-reactive individuals may be more prone to impulsive behavior that violates moral norms, whereas low-reactive individuals may be more likely to conform to moral norms because they are more sensitive to the threat of punishment. Showing the importance of interpersonal relationships, weak attachment to caregivers in adolescence is associated with impairments of empathy and a greater propensity for antisocial and immoral behavior (a major theme in Narvaez’s chapter [Chapter 17]). Peer influence is another key predictor of both antisocial and prosocial behavior in adolescence. Further, moral development in adolescence critically depends on the maturation of capacities for empathy and self-conscious emotion (e.g., guilt, embarrassment, pride), a process that is shaped by the individual’s lived experience. In closing, the authors suggest that the powerful effects of individual differences on adolescent moral development are best accounted for by models that explain the maturation of the moral sense at multiple levels of analysis and timescales.

Richard A. Shweder, Jacob R. Hickman, and Les Beldo (Chapter 20) ask how one can scientifically examine the moralities of different human groups without falling into ethnocentrism – without morally judging the practices of other groups as wrong or unacceptably different from one’s own. The authors propose to accept (at least as a methodological orientation) “moral realism” – the view that all human communities share a small set of “moral truths.” These truths are abstract and must be expressed in culturally and historically specific ways to be workable, and their differentiated expressions across groups can make them seem irreconcilable. But by identifying moral absolutes, the authors suggest, scientists can make sense of the great variety of cultures and moralities and still recognize their commonalities. To illustrate their points, they discuss examples of clashing moral practices, such as between Brahman Indian and Western views of a widow’s obligations, and between Native American whalers’ and whaling protesters’ attitudes toward whaling. Each of these groups sees its own moral position as “objective” (independent of social consensus) and “absolute” (true without need for justification), but underlying their seeming differences, the authors argue, there really might be shared moral truths.

It is worth pointing out that Baron (Chapter 8, this volume), too, suggests that people may hold some absolute moral principles (protected or sacred values). But whereas the moral realist featured by Shweder and colleagues suggests that denying these intuitively and instantly grasped truths is a sign of irrationality, the utilitarian featured by Baron suggests that holding onto such truths can lead to irrationality.

1.3.5 Part V: Applications and Extensions

Part V applies some of the core concepts and theories to the domains of law, politics, and religion and closes with a discussion of how empirical work in moral psychology bears on issues in moral philosophy.

Janice Nadler (Chapter 21) examines the sanctioning doctrines within Anglo-American criminal law and explores similarities and differences between criminal blame and ordinary social blame. Nadler takes on the topics of intended but incomplete transgressive conduct, the distinction between intended and unintended outcomes, questions of recklessness, and the role of a transgressor’s character in ordinary and legal blame. Nadler shows the complexity of the legal blame process and its many parallels in ordinary blame. On the legal side, she considers both the codified principles of US criminal law and the unwritten body of less precise standards and practices that can deviate from the codified ideals. On the ordinary side of blame, Nadler highlights the importance of both causal and mental factors that people take into account for intentional and unintentional transgressions. Nadler concludes that there is a great deal of congruity between legal and ordinary blame, especially in concepts and evidence considerations, but also somewhat different goals and certainly more severe outcomes on the legal side (especially when errors or biases take hold). Compare this chapter to Malle’s (Chapter 15, this volume) analysis of the cultural history, social regulation, and psychological processes underlying blame and punishment.

Kate W. Guan, Gordon Heltzel, and Kristin Laurin (Chapter 22) discuss the moral dimensions of political attitudes and behavior. They argue that a person’s political views – both at the level of political ideology as a whole and views on specific matters of economic and social policy – are profoundly shaped by their beliefs about right and wrong. These political views in turn drive people’s political behavior, not just at the ballot box or on the campaign trail, but in the community more generally. One downside of the way in which moral convictions fuel political attitudes and behavior is that they tend to interfere with productive communication across partisan divides, breeding a kind of animosity that stifles cooperation and compromise. Divergence in people’s moral convictions, then, leads inexorably to political polarization and gridlock. To address this problem, the authors discuss a number of potentially promising interventions, some of which target individuals’ attitudes (e.g., promoting empathy, reducing negative stereotypes) and others of which aim at improving the quality of interpersonal relationships (e.g., increasing contact, fostering dialogue across political divides).

Benjamin Grant Purzycki and Theiss Bendixen (Chapter 23) discuss the complex, multifaceted connection between morality and religion from an evolutionary perspective. After providing some much-needed conceptual ground clearing, the authors focus on accounts of the linkage between morality and religion in terms of evolved psychological mechanisms that promote cooperation and inhibit competition. One of the better known of these accounts is the supernatural punishment hypothesis. On this view, the morality–religion link is sustained by the fact that belief in an all-knowing, all-powerful god who monitors people’s behavior and punishes their moral transgressions motivates people to behave less selfishly and more cooperatively. An alternative account is that participation in religious ritual is a form of costly signaling, indicating to others that the participant can be trusted to observe the moral norms of the community, including norms of cooperation. As a result, ritual activity comes to be associated with increased cooperation and decreased competition, at least within religious groups. While there is considerable support for the idea that religion can function as a recipe for kindness and a remedy for selfishness, the authors caution that the psychological mechanisms underlying this function are not yet well understood.

Paul Rehren and Walter Sinnott-Armstrong (Chapter 24) suggest some lessons from moral psychology for ethics and metaethics. They note that empirical research on a wide range of topics, including moral character, happiness and well-being, free will and moral responsibility, and moral judgment, has had a profound influence on recent philosophical theorizing about the foundations of morality. In their chapter they focus on one issue of particular importance: the reliability and trustworthiness of moral judgment. They critically assess multiple lines of argument that threaten to undermine epistemic confidence in our moral judgments, including evolutionary debunking arguments, process arguments, arguments from disagreement, and arguments from irrelevant influences. Though the jury is still out on how successful these arguments are, there is little question that they have potentially profound implications both for moral epistemology (insofar as they pose a threat to moral intuitionism) and for philosophical methodology (insofar as they cast doubt on the thought-experimental method). The most important lesson for ethics and metaethics to be drawn from moral psychology, then, may be that future progress in moral philosophy is likely to depend on philosophers and psychologists working together, rather than in isolation from one another.

References

Alicke, M. D. (2000). Culpable control and the psychology of blame. Psychological Bulletin, 126(4), 556–574.
Anderson, R. A., Crockett, M. J., & Pizarro, D. A. (2020). A theory of moral praise. Trends in Cognitive Sciences, 24(9), 694–703.
Andrighetto, G., Brandts, J., Conte, R., Sabater-Mir, J., Solaz, H., & Villatoro, D. (2013). Punish and voice: Punishment enhances cooperation when combined with norm-signalling. PLoS ONE, 8(6), Article e64941.
Bach, T. A., Khan, A., Hallock, H., Beltrão, G., & Sousa, S. (2022). A systematic literature review of user trust in AI-enabled systems: An HCI perspective. International Journal of Human–Computer Interaction, 40(3), 1–16.
Bandura, A. (1999). Moral disengagement in the perpetration of inhumanities. Personality and Social Psychology Review, 3(3), 193–209.
Bartels, D. M., Bauman, C. W., Cushman, F. A., Pizarro, D. A., & McGraw, A. P. (2015). Moral judgment and decision making. In Keren, G. & Wu, G. (Eds.), The Wiley Blackwell handbook of judgment and decision making (pp. 478–515). John Wiley & Sons, Ltd.
Bartling, B., & Özdemir, Y. (2023). The limits to moral erosion in markets: Social norms and the replacement excuse. Games and Economic Behavior, 138, 143–160.
Baumeister, R. F., & Vohs, K. D. (2003). Self-regulation and the executive function of the self. In Leary, M. R. & Tangney, J. P. (Eds.), Handbook of self and identity (pp. 197–217). Guilford Press.
Bello, P., & Malle, B. F. (2023). Computational approaches to morality. In Sun, R. (Ed.), Cambridge handbook of computational cognitive sciences (pp. 1037–1063). Cambridge University Press.
Bicchieri, C. (2006). The grammar of society: The nature and dynamics of social norms. Cambridge University Press.
Bigman, Y. E., Waytz, A., Alterovitz, R., & Gray, K. (2019). Holding robots responsible: The elements of machine morality. Trends in Cognitive Sciences, 23(5), 365–368.
Blomberg, O., & Petersson, B. (2024). Team reasoning and collective moral obligation. Social Theory and Practice, 50(3), 483–516.
Boada, J. P., Maestre, B. R., & Genís, C. T. (2021). The ethical issues of social assistive robotics: A critical literature review. Technology in Society, 67, Article 101726.
Boehm, C. (2018). Collective intentionality: A basic and early component of moral evolution. Philosophical Psychology, 31(5), 680–702.
Bonnefon, J.-F., Rahwan, I., & Shariff, A. (2024). The moral psychology of artificial intelligence. Annual Review of Psychology, 75(1), 653–675.
Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576.
Bostyn, D. H., & Roets, A. (2016). The morality of action: The asymmetry between judgments of praise and blame in the action–omission effect. Journal of Experimental Social Psychology, 63, 19–25.
Cameron, C. D., Payne, B. K., Sinnott-Armstrong, W., Scheffer, J. A., & Inzlicht, M. (2017). Implicit moral evaluations: A multinomial modeling approach. Cognition, 158, 224–241.
Castro-Toledo, F. J., Cerezo, P., & Gómez-Bellvís, A. B. (2023). Scratching the structure of moral agency: Insights from philosophy applied to neuroscience. Frontiers in Neuroscience, 17, Article 1198001.
Cervantes, J.-A., López, S., Rodríguez, L.-F., Cervantes, S., Cervantes, F., & Ramos, F. (2020). Artificial moral agents: A survey of the current status. Science and Engineering Ethics, 26(2), 501–532.
Christensen, J. F., & Gomila, A. (2012). Moral dilemmas in cognitive neuroscience of moral decision-making: A principled review. Neuroscience & Biobehavioral Reviews, 36(4), 1249–1264.
Christner, N., Pletti, C., & Paulus, M. (2020). Emotion understanding and the moral self-concept as motivators of prosocial behavior in middle childhood. Cognitive Development, 55, Article 100893.
Coates, D. J., & Tognazzini, N. A. (Eds.). (2012). Blame: Its nature and norms. Oxford University Press.
Conte, R., Andrighetto, G., & Campenni, M. (2013). Minding norms: Mechanisms and dynamics of social order in agent societies. Oxford University Press.
Cushman, F. (2008). Crime and punishment: Distinguishing the roles of causal and intentional analyses in moral judgment. Cognition, 108(2), 353–380.
Cushman, F. (2013). The role of learning in punishment, prosociality, and human uniqueness. In Sterelny, K., Joyce, R., Calcott, B., & Fraser, B. (Eds.), Cooperation and its evolution (pp. 333–372). MIT Press.
Cushman, F., Kumar, V., & Railton, P. (Eds.). (2017). Moral learning [Special issue]. Cognition, 167, 1–282.
Cusimano, C., Thapa, S., & Malle, B. F. (2017). Judgment before emotion: People access moral evaluations faster than affective states. In Gunzelmann, G., Howes, A., Tenbrink, T., & Davelaar, E. J. (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (pp. 1848–1853). Cognitive Science Society.
de Waal, F. B. M. (2014). Natural normativity: The “is” and “ought” of animal behavior. Behaviour, 151(2–3), 185–204.
Dhillon, A., & Nicolò, A. (2023). Moral costs of corruption: A review of the literature. In Basu, K. & Mishra, A. (Eds.), Law and economic development: Behavioral and moral foundations of a changing world (pp. 93–129). Springer International Publishing.
Drew, P. (1998). Complaints about transgressions and misconduct. Research on Language & Social Interaction, 31(3–4), 295–325.
Ellemers, N., Pagliaro, S., & Nunspeet, F. van (Eds.). (2023). The Routledge international handbook of the psychology of morality. Taylor & Francis.
Enke, B. (2023). Market exposure and human morality. Nature Human Behaviour, 7(1), 134–141.
Fehr, E., & Fischbacher, U. (2004). Third-party punishment and social norms. Evolution and Human Behavior, 25(2), 63–87.
Fehr, E., & Gächter, S. (2000). Cooperation and punishment in public goods experiments. American Economic Review, 90(4), 980–994.
Fike, R. (2023). Do disruptions to the market process corrupt our morals? Review of Austrian Economics, 36(1), 99–106.
Fornari, L., Ioumpa, K., Nostro, A. D., Evans, N. J., De Angelis, L., Speer, S. P. H., Paracampo, R., Gallo, S., Spezio, M., Keysers, C., & Gazzola, V. (2023). Neuro-computational mechanisms and individual biases in action-outcome learning under moral conflict. Nature Communications, 14(1), Article 1.
Friedman, M. (2013). How to blame people responsibly. Journal of Value Inquiry, 47(3), 271–284.
Galliott, J., Macintosh, D., & Ohlin, J. D. (Eds.). (2021). Lethal autonomous weapons: Re-examining the law and ethics of robotic warfare. Oxford University Press.
Genschow, O., Cracco, E., Schneider, J., Protzko, J., Wisniewski, D., Brass, M., & Schooler, J. W. (2023). Manipulating belief in free will and its downstream consequences: A meta-analysis. Personality and Social Psychology Review, 27(1), 52–82.
Gobodo-Madikizela, P. (2002). Remorse, forgiveness, and rehumanization: Stories from South Africa. Journal of Humanistic Psychology, 42(1), 7–32.
Gollan, T., & Witte, E. H. (2008). “It was right to do it, because …” Social Psychology, 39(3), 189–196.
Gomez-Lavin, J., & Prinz, J. (2019). Parole and the moral self: Moral change mitigates responsibility. Journal of Moral Education, 48(1), 65–83.
Goodwin, G. P., & Gromet, D. M. (2014). Punishment. WIREs Cognitive Science, 5(5), 561–572.
Grappi, S., Romani, S., & Bagozzi, R. P. (2013). Consumer response to corporate irresponsible behavior: Moral emotions and virtues. Journal of Business Research, 66(10), 1814–1821.
Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619.
Greene, J. D. (2008). The secret joke of Kant’s soul. In Sinnott-Armstrong, W. (Ed.), Moral psychology, Vol. 3: The neuroscience of morality: Emotion, brain disorders, and development (pp. 35–80). MIT Press.
Guglielmo, S. (2015). Moral judgment as information processing: An integrative review. Frontiers in Psychology, 6, Article 1637.
Guglielmo, S., & Malle, B. F. (2017). Information-acquisition processes in moral judgments of blame. Personality and Social Psychology Bulletin, 43(7), 957–971.
Guglielmo, S., & Malle, B. F. (2019). Asymmetric morality: Blame is more differentiated and more extreme than praise. PLoS ONE, 14(3), Article e0213544.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834.
Haidt, J. (2003). The moral emotions. In Davidson, R. J., Scherer, K. R., & Goldsmith, H. H. (Eds.), Handbook of affective sciences (pp. 852–870). Oxford University Press.
Harrer, S. (2023). Attention is not all you need: The complicated case of ethically using large language models in healthcare and medicine. eBioMedicine, 90, Article 104512.
Hindriks, F. (2008). Intentional action and the praise-blame asymmetry. Philosophical Quarterly, 58(233), 630–641.
Hitlin, S. (2011). Values, personal identity, and the moral self. In Schwartz, S. J., Luyckx, K., & Vignoles, V. L. (Eds.), Handbook of identity theory and research (pp. 515–529). Springer.
Huebner, B., Dwyer, S., & Hauser, M. (2009). The role of emotion in moral psychology. Trends in Cognitive Sciences, 13(1), 1–6.
Kant, I. (1998). Groundwork of the metaphysics of morals (Gregor, M. J., Ed. & Trans.). Cambridge University Press. (Original work published 1785)
Kohlberg, L. (1981). Essays on moral development. Harper & Row.
Krueger, F., Hoffman, M., Walter, H., & Grafman, J. (2013). An fMRI investigation of the effects of belief in free will on third-party punishment. Social Cognitive and Affective Neuroscience, 9(8), 1143–1149.
Kupfer, T. R., & Giner-Sorolla, R. (2017). Communicating moral motives: The social signaling function of disgust. Social Psychological and Personality Science, 8(6), 632–640.
Ladak, A., Loughnan, S., & Wilks, M. (2023). The moral psychology of artificial intelligence. Current Directions in Psychological Science, 33(1), 27–34.
Laurent, S. M., Nuñez, N. L., & Schweitzer, K. A. (2016). Unintended, but still blameworthy: The roles of awareness, desire, and anger in negligence, restitution, and punishment. Cognition & Emotion, 30(7), 1271–1288.
Leary, M. R., & Tangney, J. P. (Eds.). (2012). Handbook of self and identity (2nd ed.). Guilford Press.
Leary, M. R., Tangney, J. P., & Leary, M. (Eds.). (2003). Handbook of self and identity (Paperback ed.). Guilford Press.
Lengersdorff, L. L., Wagner, I. C., Lockwood, P. L., & Lamm, C. (2020). When implicit prosociality trumps selfishness: The neural valuation system underpins more optimal choices when learning to avoid harm to others than to oneself. Journal of Neuroscience, 40(38), 7286–7299.
Lin, P., Abney, K., & Bekey, G. A. (Eds.). (2011). Robot ethics: The ethical and social implications of robotics. MIT Press.
Malle, B. F. (2016). Integrating robot ethics and machine morality: The study and design of moral competence in robots. Ethics and Information Technology, 18(4), 243–256.
Malle, B. F. (2021). Moral judgments. Annual Review of Psychology, 72, 293–318.
Malle, B. F., Guglielmo, S., & Monroe, A. E. (2014). A theory of blame. Psychological Inquiry, 25(2), 147–186.
Malle, B. F., Guglielmo, S., Voiklis, J., & Monroe, A. E. (2022). Cognitive blame is socially shaped. Current Directions in Psychological Science, 31(2), 169–176.
Malle, B. F., & Scheutz, M. (2019). Learning how to behave: Moral competence for social robots. In Bendel, O. (Ed.), Handbuch Maschinenethik [Handbook of machine ethics]. Springer.
Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI’15 (pp. 117–124). ACM.
Malle, B. F., & Ullman, D. (2021). A multidimensional conception and measure of human-robot trust. In Nam, C. S. & Lyons, J. B. (Eds.), Trust in human-robot interaction (pp. 3–25). Academic Press.
Marazziti, D., Baroni, S., Landi, P., Ceresoli, D., & Dell’Osso, L. (2013). The neurobiology of moral sense: Facts or hypotheses? Annals of General Psychiatry, 12(1), Article 6.
Martin, J. W., & Cushman, F. (2016). Why we forgive what can’t be controlled. Cognition, 147, 133–143.
May, J. (2018). Regard for reason in the moral mind. Oxford University Press.
May, J. (2023). Moral rationalism on the brain. Mind & Language, 38(1), 237–255.
Merritt, A. C., Effron, D. A., & Monin, B. (2010). Moral self-licensing: When being good frees us to be bad. Social and Personality Psychology Compass, 4(5), 344–357.
Metzinger, T. (2004). Being no one: The self-model theory of subjectivity. MIT Press.
Mikhail, J. (2008). Moral cognition and computational theory. In Sinnott-Armstrong, W. (Ed.), Moral psychology: Vol. 3. The neuroscience of morality (pp. 81–92). MIT Press.
Mill, J. S. (1998). Utilitarianism. Oxford University Press.
Misselhorn, C. (2018). Artificial morality: Concepts, issues and challenges. Society, 55(2), 161–169.
Moisuc, A., & Brauer, M. (2019). Social norms are enforced by friends: The effect of relationship closeness on bystanders’ tendency to confront perpetrators of uncivil, immoral, and discriminatory behaviors. European Journal of Social Psychology, 49(4), 824–830.
Molho, C., Tybur, J. M., Van Lange, P. A. M., & Balliet, D. (2020). Direct and indirect punishment of norm violations in daily life. Nature Communications, 11, Article 34.
Monin, B., Pizarro, D. A., & Beer, J. S. (2007). Deciding versus reacting: Conceptions of moral judgment and the reason-affect debate. Review of General Psychology, 11(2), 99–111.
Monroe, A. E., Brady, G. L., & Malle, B. F. (2017). This isn’t the free will worth looking for: General free will beliefs do not influence moral judgments, agent-specific choice ascriptions do. Social Psychological and Personality Science, 8(2), 191–199.
Monroe, A. E., Dillon, K. D., Guglielmo, S., & Baumeister, R. F. (2018). It’s not what you do, but what everyone else does: On the role of descriptive norms and subjectivism in moral judgment. Journal of Experimental Social Psychology, 77, 1–10.
Monroe, A. E., Dillon, K. D., & Malle, B. F. (2014). Bringing free will down to Earth: People’s psychological concept of free will and its role in moral judgment. Consciousness and Cognition, 27, 100–108.
Monroe, A. E., & Malle, B. F. (2010). From uncaused will to conscious choice: The need to study, not speculate about people’s folk concept of free will. Review of Philosophy and Psychology, 1(2), 211–224.
Monroe, A. E., & Malle, B. F. (2019). People systematically update moral judgments of blame. Journal of Personality and Social Psychology, 116(2), 215–236.
Murray, S., O’Neill, K., Bridges, J., Sytsma, J., & Irving, Z. (2024). Blame for Hum(e)an beings: The role of character information in judgments of blame. Social Psychological and Personality Science. https://doi.org/10.1177/19485506241233708
Nahmias, E., Coates, D. J., & Kvaran, T. (2007). Free will, moral responsibility, and mechanism: Experiments on folk intuitions. Midwest Studies in Philosophy, 31(1), 214–242.
Nahmias, E., Morris, S., Nadelhoffer, T., & Turner, J. (2005). Surveying freedom: Folk intuitions about free will and moral responsibility. Philosophical Psychology, 18(5), 561–584.
Nahmias, E., Shepard, J., & Reuter, S. (2014). It’s OK if ‘my brain made me do it’: People’s intuitions about free will and neuroscientific prediction. Cognition, 133(2), 502–516.
Newman, G. E., De Freitas, J., & Knobe, J. (2015). Beliefs about the true self explain asymmetries based on moral judgment. Cognitive Science, 39(1), 96–125.
Nichols, S. (2011). Experimental philosophy and the problem of free will. Science, 331(6023), 1401–1403.
Nichols, S., & Knobe, J. (2007). Moral responsibility and determinism: The cognitive science of folk intuitions. Noûs, 41(4), 663–685.
Olson, E. T. (1999). The human animal: Personal identity without psychology. Oxford University Press.
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), Article aac4716.
Panagopoulou-Koutnatzi, F. (2014). The practice of naming and shaming through the publicizing of “culprit” lists. In Akrivopoulou, C. M. & Garipidis, N. (Eds.), Human rights and the impact of ICT in the public sphere: Participation, democracy, and political autonomy (pp. 145–155). Information Science Reference/IGI Global.
Parfit, D. (1992). Reasons and persons. Clarendon Press.
Pereira, L. M., & Lopes, A. B. (2020). Machine ethics: From machine morals to the machinery of morality. Springer Nature.
Pizarro, D. A., Uhlmann, E., & Salovey, P. (2003). Asymmetry in judgments of moral blame and praise. Psychological Science, 14(3), 267–272.
Prinz, J. (2006). The emotional basis of moral judgments. Philosophical Explorations, 9(1), 29–43.
Prinz, J., & Nichols, S. (2010). Moral emotions. In Doris, J. M. (Ed.), The moral psychology handbook (pp. 111–146). Oxford University Press.
Przepiorka, W., & Berger, J. (2016). The sanctioning dilemma: A quasi-experiment on social norm enforcement in the train. European Sociological Review, 32(3), 439–451.
Riordan, C. A., Marlin, N. A., & Kellogg, R. T. (1983). The effectiveness of accounts following transgression. Social Psychology Quarterly, 46(3), 213–219.
Robbins, P., & Alvear, F. (2023). Deformative experience: Explaining the effects of adversity on moral evaluation. Social Cognition, 41(5), 415–446.
Royzman, E. B., Goodwin, G. P., & Leeman, R. F. (2011). When sentimental rules collide: “Norms with feelings” in the dilemmatic context. Cognition, 121(1), 101–114.
Royzman, E. B., Leeman, R. F., & Baron, J. (2009). Unsentimental ethics: Towards a content-specific account of the moral-conventional distinction. Cognition, 112(1), 159–174.
Salem, M., Lakatos, G., Amirabdollahian, F., & Dautenhahn, K. (2015). Towards safe and trustworthy social robots: Ethical challenges and practical issues. In Tapus, A., André, E., Martin, J.-C., Ferland, F., & Ammi, M. (Eds.), Social Robotics: 7th International Conference, ICSR 2015, Proceedings (pp. 584–593). Springer International Publishing.
Sattler, S., Dubljević, V., & Racine, E. (2023). Cooperative behavior in the workplace: Empirical evidence from the agent-deed-consequences model of moral judgment. Frontiers in Psychology, 13, Article 1064442.
Sauer, H. (2011). Social intuitionism and the psychology of moral reasoning. Philosophy Compass, 6(10), 708–721.
Sauer, H. (2012). Morally irrelevant factors: What’s left of the dual process-model of moral cognition? Philosophical Psychology, 25(6), 783–811.
Scherer, K. R. (2009). The dynamic architecture of emotion: Evidence for the component process model. Cognition and Emotion, 23(7), 1307–1351.
Schwartz, S. H. (1992). Universals in the content and structure of values: Theoretical advances and empirical tests in 20 countries. In Zanna, M. P. (Ed.), Advances in experimental social psychology (Vol. 25, pp. 1–65). Academic Press.
Semin, G. R., & Manstead, A. S. R. (1983). The accountability of conduct: A social psychological analysis. Academic Press.
Shank, D. B., & DeSanti, A. (2018). Attributions of morality and mind to artificial intelligence after real-world moral violations. Computers in Human Behavior, 86, 401–411.
Shank, D. B., Kashima, Y., Peters, K., Li, Y., Robins, G., & Kirley, M. (2019). Norm talk and human cooperation: Can we talk ourselves into cooperation? Journal of Personality and Social Psychology, 117(1), 99–123.
Sharkey, A., & Sharkey, N. (2010). Granny and the robots: Ethical issues in robot care for the elderly. Ethics and Information Technology, 14(1), 27–40.
Silver, E., & Silver, J. R. (2021). Morality and self-control: The role of binding and individualizing moral motives. Deviant Behavior, 42(3), 366–385.
Sio, F. S. de, & Wynsberghe, A. van. (2016). When should we use care robots? The nature-of-activities approach. Science and Engineering Ethics, 22, 1745–1760.
Sommers, T. (2010). Experimental philosophy and free will. Philosophy Compass, 5(2), 199–212.
Sorial, S. (2016). Performing anger to signal injustice: The expression of anger in victim impact statements. In Abell, C., & Smith, J. (Eds.), The expression of emotion: Philosophical, psychological and legal perspectives (pp. 287–310). Cambridge University Press.
Steele, C. M. (1988). The psychology of self-affirmation: Sustaining the integrity of the self. In Berkowitz, L. (Ed.), Advances in experimental social psychology (Vol. 21, pp. 261–302). Academic Press.
Strawson, P. F. (1962). Freedom and resentment. Proceedings of the British Academy, 48, 1–25.
Strohminger, N., Knobe, J., & Newman, G. (2017). The true self: A psychological concept distinct from the self. Perspectives on Psychological Science, 12(4), 551–560.
Strohminger, N., & Nichols, S. (2014). The essential moral self. Cognition, 131(1), 159–171.
Stuart, M. T., & Kneer, M. (2021). Guilty artificial minds: Folk attributions of mens rea and culpability to artificially intelligent agents. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), Article 363, 1–27.
Suls, J. (Ed.). (2014). Psychological perspectives on the self: Vol. 4. The self in social perspective. Psychology Press.
Sztompka, P. (2019). Trust in the moral space. In Sasaki, M. (Ed.), Trust in contemporary society (Vol. 42, pp. 31–40). Koninklijke Brill NV.
Tangney, J. P., Stuewig, J., & Mashek, D. J. (2007). Moral emotions and moral behavior. Annual Review of Psychology, 58, 345–372.
Tedeschi, J. T., & Reiss, M. (1981). Verbal strategies as impression management. In Antaki, C. (Ed.), The psychology of ordinary social behaviour (pp. 271–309). Academic Press.
Tesser, A. (1988). Towards a self-evaluation maintenance model of social behavior. In Berkowitz, L. (Ed.), Advances in experimental social psychology (Vol. 21, pp. 181–227). Academic Press.
Thomson, J. J. (1985). The trolley problem. Yale Law Journal, 94, 1395–1415.
Tiberius, V. (2015). Moral psychology: A contemporary introduction. Routledge/Taylor & Francis Group.
Tolmeijer, S., Kneer, M., Sarasua, C., Christen, M., & Bernstein, A. (2021). Implementations in machine ethics: A survey. ACM Computing Surveys, 53(6), Article 132, 1–38.
Tomasello, M. (2016). A natural history of human morality. Harvard University Press.
Vila-Henninger, L. A. (2021). A dual-process model of economic behavior: Using culture and cognition, economic sociology, behavioral economics, and neuroscience to reconcile moral and self-interested economic action. Sociological Forum, 36(Suppl. 1), 1271–1296.
Vohs, K. D., & Schooler, J. W. (2008). The value of believing in free will: Encouraging a belief in determinism increases cheating. Psychological Science, 19(1), 49–54.
Vonasch, A. J., Baumeister, R. F., & Mele, A. R. (2018). Ordinary people think free will is a lack of constraint, not the presence of a soul. Consciousness and Cognition, 60, 133–151.
Wilson, J. Q. (1993). The moral sense. American Political Science Review, 87(1), 1–11.
Woolfolk, R. L., Doris, J. M., & Darley, J. M. (2006). Identification, situational constraint, and social cognition: Studies in the attribution of moral responsibility. Cognition, 100(2), 283–301.
Wylie, R. C. (1979). The self-concept (Vol. 2). University of Nebraska Press.
Yan, L., Sha, L., Zhao, L., Li, Y., Martinez-Maldonado, R., Chen, G., Li, X., Jin, Y., & Gašević, D. (2024). Practical and ethical challenges of large language models in education: A systematic scoping review. British Journal of Educational Technology, 55(1), 90–112.
Yang, Q., Yan, L., Luo, J., Li, A., Zhang, Y., Tian, X., & Zhang, D. (2013). Temporal dynamics of disgust and morality: An event-related potential study. PLoS ONE, 8(5), Article e65094.
Yoder, K. J., & Decety, J. (2014). Spatiotemporal neural dynamics of moral judgment: A high-density ERP study. Neuropsychologia, 60, 39–45.
Young, L., Chakroff, A., & Tom, J. (2012). Doing good leads to more good: The reinforcing power of a moral self-concept. Review of Philosophy and Psychology, 3(3), 325–334.
Yucel, M., & Vaish, A. (2021). Eliciting forgiveness. WIREs Cognitive Science, 12(6), Article e1572.
Zakharin, M., Curry, O. S., Lewis, G., & Bates, T. C. (2024). Modular morals: The genetic architecture of morality as cooperation. PsyArXiv. https://doi.org/10.31234/osf.io/srjyq
Zeelenberg, M., Breugelmans, S. M., & de Hooge, I. E. (2012). Moral sentiments: A behavioral economics approach. In Innocenti, A. & Sirigu, A. (Eds.), Neuroscience and the economics of decision making (pp. 73–85). Routledge/Taylor & Francis Group.
Zhang, Q., Wallbridge, C. D., Jones, D. M., & Morgan, P. (2021). The blame game: Double standards apply to autonomous vehicle accidents. In Stanton, N. (Ed.), Advances in human aspects of transportation (pp. 308–314). Springer International Publishing.
Figure 1.1 The landscape of morality and its five major territories: moral behavior (including moral decision making), moral judgments (including multiple types, such as evaluation, wrongness, and blame), moral sanctions, moral emotions, and moral communication (expanded from Bello & Malle, 2023, Figure 31.1). Source: Sun, The Cambridge Handbook of Computational Cognitive Sciences, 2023 ©, published by Cambridge University Press, reproduced with permission.
