
The centrality of reasoning in moral judgments: First- and third-party evaluations of cheating

Published online by Cambridge University Press:  21 November 2024

Tal Waltzer*
Affiliation:
Department of Psychology, University of California, San Diego, USA
Arvid Samuelson
Affiliation:
Department of Psychology, Cornell University, Ithaca, USA
Audun Dahl
Affiliation:
Department of Psychology, Cornell University, Ithaca, USA
Corresponding author: Tal Waltzer; Email: twaltzer@ucsd.edu

Abstract

What role does reasoning about moral principles play in people’s judgments about what is right or wrong? According to one view, reasoning usually plays little role. People tend to do what suits their self-interests and concoct moral reasons afterward to justify their own behavior. Thus, in this view, people are far more forgiving of their own violations than of others’ violations. According to a contrasting view, principled reasoning generally guides judgments and decisions about our own and others’ actions. This view predicts that people usually can, and do, articulate the principles that guide their moral judgments and decisions. The present research examined a phenomenon at the center of these debates: students’ evaluations of academic cheating. Across three studies, we used structured interviews and online surveys to examine first- and third-party judgments and reasoning about cheating events. Third-party scenarios were derived from students’ own accounts of cheating events and manipulated based on the reasons students provided. Findings supported the view that reasoning is central to evaluations of cheating. Participants articulated reasons consistent with their judgments about their own and others’ actions. The findings advance classic debates about reasoning in morality and exemplify a paradigm that can bring further advances.

Type
Empirical Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Society for Judgment and Decision Making and European Association for Decision Making

1. Introduction

Society operates on the assumption that moral reasoning is central to how people form judgments and decisions in most situations (Adler & Rips, Reference Adler and Rips2008; Ajzen & Fishbein, Reference Ajzen, Fishbein, Albarracín, Johnson and Zanna2005; Dahl et al., Reference Dahl, Gingo, Uttich and Turiel2018). If people were rarely sensitive to reasons, why provide moral justifications for our stances on abortion or gun legislation? If it turned out that people rarely, if ever, cared about being honest, despite their proclamations to the contrary, our mutual trust would dissolve (Ho, Reference Ho2021; Kolb, Reference Kolb2008; Levine, Reference Levine2014). Yet, the notion that people’s moral principles guide their judgments and decisions faces an obvious challenge: People sometimes seem to act against their avowed moral principles. For example, even though we claim to value honesty and discourage lying, most people lie about once a day (DePaulo et al., Reference DePaulo, Kashy, Kirkendol, Wyer and Epstein1996; Serota et al., Reference Serota, Levine and Docan-Morgan2022; Serota & Levine, Reference Serota and Levine2015).

Seeing this apparent tension between moral principles and actions, some scholars began to doubt whether reasoning about moral principles guides our decisions and actions (Blasi, Reference Blasi1980). To them, principled moral reasoning looked powerless against the forces of self-interest and emotions (Cushman, Reference Cushman2020; Darwall, Reference Darwall2009; Haidt, Reference Haidt2001; Hume, Reference Hume1739).

Cheating has been a test case in debates about whether moral reasoning typically guides human actions (e.g., Blasi, Reference Blasi1980; Hartshorne & May, Reference Hartshorne and May1928; Kohlberg, Reference Kohlberg and Mischel1971). Both universally condemned and highly prevalent, the phenomenon of cheating has prompted many scholars to assert that people readily set aside moral principles to pursue their self-interest. In this view, moral reasoning is peripheral—not central—to how people form judgments and decisions about what to do. As one prominent researcher of academic cheating put it, ‘Morality does not seem to be a major influence on student decisions to cheat or not to cheat’ (p. 444, McCabe, Reference McCabe1997; see also Stephens, Reference Stephens2017).

But the theoretical implications of cheating hinge on the psychological realities of cheating. Do students who cheat readily turn off, or ‘neutralize’, their moral principles when they want to cheat, as some have argued (Bandura, Reference Bandura2016; Haines et al., Reference Haines, Diekhoff, LaBeff and Clark1986; Stephens, Reference Stephens2017; Sykes & Matza, Reference Sykes and Matza1957)? If so, moral principles would indeed be weak guides for behavior, and reasoning would be peripheral to how people form judgments and decisions about cheating. Alternatively, do students use principled reasoning that leads them to judge that cheating is permissible under some circumstances, whether it is their own cheating or someone else’s cheating? If so, cheating would not imply a disconnect between genuine moral reasoning and student actions; instead, it would exemplify how moral reasoning can involve making exceptions to general principles under special circumstances (Dahl et al., Reference Dahl, Gingo, Uttich and Turiel2018; Waltzer & Dahl, Reference Waltzer, Dahl, Rettinger and Gallant2022).

To examine the latter proposal—that reasoning is central to students’ judgments and decisions about cheating—the present research tested three critical hypotheses: (1) Many, though not all, students condemn their own acts of cheating (first-party perspective); (2) in situations where students judge their own cheating as okay, other students not involved in the act tend to agree that the act is okay (third-party perspective); and (3) the reasons students provide for their first-party judgments reference features that shape others’ third-party judgments about those same situations. (For instance, if students say their own cheating was okay because of a family obligation, the presence of that family obligation will make others more accepting of the cheating from a third-party perspective.) We tested these three hypotheses in three studies of undergraduate students in the United States.

1.1. Academic cheating: A consequential and common ethical challenge

In this paper, we define academic cheating as an academic action that violates institutional rules and that would, if carried out successfully, yield academic advantages to one or more involved students (Barnhardt, Reference Barnhardt2016; Murdock et al., Reference Murdock, Stephens, Grotewiel, Brown and Harris2016; Waltzer & Dahl, Reference Waltzer and Dahl2023). Examples of cheating include bringing prohibited crib notes into an examination or giving homework answers to a classmate (Waltzer & Dahl, Reference Waltzer and Dahl2023). Academic cheating offers an ideal context to study how reasoning about right and wrong shapes everyday judgments.

In the eyes of most educators and scholars, cheating undermines the core values of educational institutions, puts students at risk of missing out on learning opportunities, and even threatens students’ long-term academic careers through suspension or expulsion (Bretag, Reference Bretag2020; Cizek, Reference Cizek2003; McCabe et al., Reference McCabe, Butterfield and Treviño2012; Miller et al., Reference Miller, Shoptaugh and Wooldridge2011). Despite these downsides, the vast majority of students—over 90 percent by most estimates—cheat during their academic careers (Davis et al., Reference Davis, Grover, Becker and McGregor1992; McCabe et al., Reference McCabe, Butterfield and Treviño2012; Waltzer & Dahl, Reference Waltzer and Dahl2023; Yardley et al., Reference Yardley, Rodriguez, Bates and Nelson2009). Reflecting its practical and historical significance, cheating has inspired a century or more of psychological research (Bok, Reference Bok1978; Drake, Reference Drake1941; Hartshorne & May, Reference Hartshorne and May1928; McCabe et al., Reference McCabe, Butterfield and Treviño2012): Why do students seemingly act in violation of their moral principles, cheating even though they must know that cheating is wrong?

1.2. Moral psychological research on cheating

One reason why debates about cheating have remained unresolved, we contend, is that few studies have examined the psychology behind students’ everyday decisions to cheat. Specifically, little research has investigated how, if at all, moral reasoning shapes students’ judgments and decisions about cheating. Thus, we lack knowledge of students’ judgments and reasoning about their own acts of cheating, as well as how such first-party views on one’s own cheating compare to other students’ third-party views about similar situations.

Third-party judgments hold a special significance in research on morality, as they distinguish genuine moral judgments from self-interested judgments. If I accept my own act of cheating, I might be doing so out of self-interest. But if, from a third-party perspective, I accept the cheating of another, unfamiliar student in that very same situation, it suggests that I truly think the act is morally permissible (Killen & Dahl, Reference Killen, Dahl, Gray and Graham2018; Rawls, Reference Rawls1971; Scanlon, Reference Scanlon1998). The present research examines students’ first- and third-party evaluations and reasoning about the specific acts of cheating they encounter in everyday life.

If everyone in all situations judged cheating as wrong from a third-party perspective, it might not be necessary to study judgments and reasoning about specific acts of cheating. We would know from the outset that anybody who did not personally benefit from an act of cheating would judge that act to be wrong. However, third-party evaluations of cheating and other forms of dishonesty vary substantially across situations (Jensen et al., Reference Jensen, Arnett, Feldman and Cauffman2002; Waltzer et al., Reference Waltzer, Samuelson and Dahl2022; Waltzer & Dahl, Reference Waltzer and Dahl2021; Yachison et al., Reference Yachison, Okoshken and Talwar2018). In some cases, lying is tolerated or even encouraged (e.g., prosocial lies; Erat & Gneezy, Reference Erat and Gneezy2012; Levine et al., Reference Levine, Kim and Hamel2010; Zhao et al., Reference Zhao, Heyman, Chen, Sun, Zhang and Lee2019). To take an extreme example: Most people probably approve of using deception to save innocent people from harm, as many heroic individuals did to rescue Jews during the Holocaust (Oliner & Oliner, Reference Oliner and Oliner1988; Turiel, Reference Turiel2002). Because of such situational variability in judgments about cheating, we cannot know how people judge specific cases of academic cheating if we rely solely on the kinds of general questions (e.g., ‘is cheating wrong?’) typically used in prior work (e.g., Davis et al., Reference Davis, Grover, Becker and McGregor1992; see Barnhardt, Reference Barnhardt2016; Bouville, Reference Bouville2010). The present research examined how students reason and make judgments about a variety of specific cheating events.

1.3. To what extent does moral reasoning guide judgments and decisions about cheating?

A common explanation for why students cheat is that students usually neutralize, or morally disengage from, their own acts of cheating (Bandura, Reference Bandura2016; Haines et al., Reference Haines, Diekhoff, LaBeff and Clark1986; McCabe et al., Reference McCabe, Butterfield and Treviño2012; Stephens, Reference Stephens2017; Sykes & Matza, Reference Sykes and Matza1957). Neutralization and moral disengagement refer to processes by which ‘moral self-censure can be disengaged from reprehensible conduct’ (Bandura, Reference Bandura2002, p. 102). Individuals condone behaviors they would ordinarily condemn, for instance, because those behaviors serve their self-interest. We will call this the reasoning-peripheral view. This view predicts (1) that students typically judge their own acts of cheating as permissible; (2) that other students, who do not personally benefit from those acts, would judge those same acts more harshly; and (3) that the reasons students provide for their judgments largely serve to excuse, rather than guide, decisions.

This reasoning-peripheral view on cheating draws from intuitionist accounts of moral psychology, according to which most moral judgments stem from affective reactions, or intuitions, and not from reasoning (Greene, Reference Greene2013; Haidt, Reference Haidt2001, Reference Haidt2012). These accounts have proposed that people are largely unaware of why they make the moral judgments that they do. Most of the time, people who form moral judgments are said to be either ‘morally dumbfounded’—unable to explain the basis for their judgments—or only have access to post-hoc rationalizations—reasons unrelated to how the judgments were formed in the first place (Cushman, Reference Cushman2020). According to Haidt (Reference Haidt2007), ‘moral reasoning, when it occurs, is usually a post-hoc process in which we search for evidence to support our initial intuitive reaction’ (p. 998).

This paper tests the predictions of an alternative, reasoning-central account of student cheating (for related discussion, see Jacobson, Reference Jacobson and Timmons2012). Our approach proposes that reasoning about moral principles is central to the formation of moral judgments (for similar views, see Campbell & Kumar, Reference Campbell and Kumar2012; Landy & Royzman, Reference Landy, Royzman and Pennycook2018; Nucci & Gingo, Reference Nucci, Gingo and Goswami2011; Royzman et al., Reference Royzman, Kim and Leeman2015; Turiel, Reference Turiel, Smith, Rogers and Tomlinson2003). Building on recent work in moral development and cognition, we define moral reasoning as transitions in thoughts in accordance with moral principles that the individual can articulate and endorse (Adler & Rips, Reference Adler and Rips2008; Dahl & Killen, Reference Dahl, Killen, Wixted and Ghetti2018; Harman, Reference Harman1986; Killen & Dahl, Reference Killen and Dahl2021). By moral principles, we mean general considerations about how to protect and promote others’ welfare, rights, and justice (Dahl & Killen, Reference Dahl, Killen, Wixted and Ghetti2018).

This account offers an explanation for why students cheat that is grounded in principled reasoning. According to the reasoning-central framework, people reason about multiple, sometimes competing, principles when they encounter issues as complex as cheating (Campbell & Kumar, Reference Campbell and Kumar2012; Dahl et al., Reference Dahl, Gingo, Uttich and Turiel2018; Nucci et al., Reference Nucci, Turiel and Roded2017; Turiel & Dahl, Reference Turiel, Dahl, Bayertz and Roughley2018). A student may value academic integrity but also hold that they are obligated to help a family member in need. When these principles conflict, as when a family member is admitted to a hospital the same night an assignment is due, a student who thinks cheating is generally wrong may nonetheless judge that they have the right to cheat in order to help their family member (DeBernardi et al., Reference DeBernardi, Waltzer and Dahl2021). The implication is not that the student is unconcerned with academic integrity; rather, the implication is that academic integrity is not the student’s only concern (Waltzer et al., Reference Waltzer, Samuelson and Dahl2022; Waltzer & Dahl, Reference Waltzer, Dahl, Rettinger and Gallant2022).

The comparison of first- and third-party perspectives is critical for distinguishing the predictions of the reasoning-peripheral and reasoning-central views. The reasoning-peripheral view proposes that people can readily turn off, disengage, or neutralize their moral principles when they transgress. Doing so would allow people to transgress without feeling bad about themselves. If so, people’s (first-party) evaluations and reasoning about their own acts of cheating should differ fundamentally from people’s (third-party) evaluations and reasoning about other people’s acts of cheating. In the first-party case, their evaluations and reasoning would be molded to fit their own self-interest and preserve their self-image. In the third-party case, when people are unhindered by self-interest, they would be able to apply their disinterested moral views. The reasoning-central view, on the other hand, predicts that first- and third-party evaluations and reasoning are quite similar, insofar as they have access to the same information, since people would be applying the same principles to themselves as they would to others.

To test this framework, which treats moral reasoning as central to judgments and decisions, the present research examined three main questions.

1.4. Three questions about the role of reasoning in judgments about cheating

1.4.1. To what extent do people neutralize their past acts of cheating?

Nearly all students cheat at least once during their academic careers, even though they judge that cheating is generally wrong (Brown, Reference Brown2002; Davis et al., Reference Davis, Grover, Becker and McGregor1992; McCabe et al., Reference McCabe, Butterfield and Treviño2012). According to the reasoning-peripheral perspective, when students cheat, they readily turn off their principles against cheating to avoid feeling bad about their own actions. Many scholars have proposed students use techniques of neutralization to construe their actions in a positive light so that they can commit violations without taking responsibility for their actions (see Ariely, Reference Ariely2012; Bandura, Reference Bandura2016; Haines et al., Reference Haines, Diekhoff, LaBeff and Clark1986; Stephens, Reference Stephens2017; Sykes & Matza, Reference Sykes and Matza1957). From this perspective, because students can turn off their moral principles so easily and continue to see themselves in a positive light no matter their wrongdoings, principled reasoning imposes little constraint on students’ decisions to cheat. These reasoning-peripheral accounts of moral decision-making would predict that people rarely regret or take blame for their own past violations (Dahl & Waltzer, Reference Dahl and Waltzer2018).

In contrast, the reasoning-central approach proposes that people often experience regret and acknowledge responsibility after having done what they deem to be wrong (Dahl et al., Reference Dahl, Gingo, Uttich and Turiel2018; Turiel & Dahl, Reference Turiel, Dahl, Bayertz and Roughley2018). For instance, a student considering whether to secretly share test answers with a friend might prioritize prosocial concerns over honesty and thus decide to cheat in that situation (Levine et al., Reference Levine, Kim and Hamel2010; Waltzer & Dahl, Reference Waltzer and Dahl2023; Zhao et al., Reference Zhao, Heyman, Chen, Sun, Zhang and Lee2019). Still, that student could remain concerned with the dishonesty of their action, evaluate it negatively, and even regret that they cheated. Here, the role of reasoning is genuinely guiding judgments as opposed to serving selfish purposes.

In the present research, we examined people’s evaluations of their own cheating actions. In line with our approach, we expected most students to take responsibility for and condemn their own acts of cheating, rather than ‘neutralizing’ them.

1.4.2. Do people judge others’ acts of cheating more harshly than their own acts?

A second key question pertains to the role of first- versus third-party perspectives in moral judgments. Reasoning-peripheral accounts of moral decision-making assume that people will judge their own actions from a first-party perspective (evaluating their own actions) generally more favorably than they would judge someone else committing the same action from a third-party perspective (evaluating others’ actions). This position dominates in social psychological research on judgments and reasoning (Batson, Reference Batson, Sinnott-Armstrong and Miller2017; Jones & Nisbett, Reference Jones, Nisbett, Jones, Kanouse, Kelley, Nisbett, Valins and Weiner1972; Pronin et al., Reference Pronin, Gilovich and Ross2004). In this view, differences in perspective are driven by selfish biases, motivated reasoning, or attempts to elevate oneself over others (Batson et al., Reference Batson, Kobrynowicz, Dinnerstein, Kampf and Wilson1997; Jordan & Monin, Reference Jordan and Monin2008; Krebs & Laird, Reference Krebs and Laird1998; Valdesolo & DeSteno, Reference Valdesolo and DeSteno2007). A key limitation of past research on first- and third-party moral judgments is that this work has rarely compared judgments about the same events. Unless researchers compare first- and third-party judgments about the same events, it is difficult to know whether any measured differences between first- and third-party judgments reflect a first-party bias or whether they are simply judgments about different events (Gold et al., Reference Gold, Pulford and Colman2015; Nadelhoffer & Feltz, Reference Nadelhoffer and Feltz2008).

In the present research, we assessed how students evaluated their own cheating acts in relation to their peers’ evaluations of the same acts. Based on our proposal that moral judgments are reason-driven, consistent, and built on the available information, we expected first- and third-party judgments about the same cheating acts to be sensitive to the same features. Thus, we expected first-party judgments to align with third-party judgments.

1.4.3. Do people articulate reasons that account for judgments about cheating?

Our third question is whether the reasons people provide to justify their judgments do indeed give rise to those judgments. According to a reasoning-peripheral view of decision-making, moral judgments do not usually come from reasoning about moral principles but from amoral kinds of self-interest or affective reactions. Since moral reasoning is said to play little role in the formation of judgments, these accounts propose that people can struggle to articulate reasons for their judgments—a phenomenon known as moral dumbfounding (Haidt, Reference Haidt2001; see Royzman et al., Reference Royzman, Kim and Leeman2015). Moreover, when people do provide reasoning for their judgments, those reasons are hypothesized to be mere post-hoc rationalizations that are disconnected from the processes that generated the judgments in the first place (Cushman, Reference Cushman2020; Schwitzgebel & Ellis, Reference Schwitzgebel, Ellis, Bonnefon and Trémolière2017; Sharot et al., Reference Sharot, Velasquez and Dolan2010; Vinckier et al., Reference Vinckier, Rigoux, Kurniawan, Hu, Bourgeois-Gironde, Daunizeau and Pessiglione2019).

In contrast, the reasoning-central approach proposes that people are typically able to articulate and endorse the principles that actually guide their judgments (Dahl et al., Reference Dahl, Gingo, Uttich and Turiel2018; Nucci & Gingo, Reference Nucci, Gingo and Goswami2011; Turiel, Reference Turiel, Smith, Rogers and Tomlinson2003; Waltzer & Dahl, Reference Waltzer, Dahl, Rettinger and Gallant2022). For example, consider the student who judged they had the right to cheat because they needed to help a family member in the hospital. We predict that the student would, first, be able to state this reason afterward and, second, judge that it would have been wrong to cheat had the family member not been in the hospital. In short, we expect that the reasons people give in a research interview for their judgments will be causally operative reasons that generally align with their patterns of judgments.

In the present research, we elicited students’ reasons for their judgments about cheating events and then tested whether altering those features would change other students’ judgments. We propose that people can largely articulate the reasons that genuinely guide choices about what is right or wrong. Thus, we expected manipulating those reasons to influence judgments about the acts.

1.5. The present research

Across three studies, we tested key predictions of the reasoning-central account about people’s moral reasoning and judgments about their own and others’ actions. Our investigation focused on college students’ views on the kinds of cheating that occur in their lives. Academic cheating offers a useful context for studying the relationships between reasoning and evaluations from first- and third-party perspectives. The reasoning-central account, like the reasoning-peripheral account, purports to explain common psychological phenomena. It is therefore desirable to study behavior that most of the study population has regular encounters with, from first- and third-party perspectives (Dahl, Reference Dahl2017; Graham, Reference Graham2014; Hofmann et al., Reference Hofmann, Wisneski, Brandt and Skitka2014; McCall, Reference McCall1977; Mikula et al., Reference Mikula, Petri and Tanzer1990). Cheating is just that. Virtually all students face decisions about whether to cheat and about how to respond to others’ cheating (Waltzer et al., Reference Waltzer, Samuelson and Dahl2022; Waltzer & Dahl, Reference Waltzer and Dahl2021). Of course, cheating is not representative of all moral issues. The present paradigm should be expanded to examine other moral phenomena as well, a point we return to in the General Discussion.

The present studies used interviews and online surveys about real and realistic cheating events to address three interrelated research questions (Table 1). Together, these three studies build on one another to advance the debate on the role of reasoning in moral judgments.

Table 1 Summary of central questions and hypotheses across Studies 1, 2, and 3

In Study 1, we interviewed students about their own cheating acts and measured their tendency to neutralize those acts. In Study 2, we adapted the real-life cheating acts described in Study 1 into hypothetical scenarios, presented them to a new set of participants, and prompted them for their third-party evaluations and reasons. In Study 3, we manipulated the situational features mentioned as reasons by participants in Studies 1 and 2 to see whether these features in fact had an effect on judgments about cheating.

2. Study 1: First-party judgments about past cheating events

2.1. Method

In Study 1, we used structured interviews with college students to elicit first-party descriptions and judgments about their own past cheating acts.

2.1.1. Participants

Undergraduate students (N = 60, 43 women, 16 men, and 1 nonbinary; M age = 19.63; SD age = 1.40) were recruited using a subject pool at a large public university in the Western United States. Students received class credit for their participation.

2.1.2. Materials and procedure

Students participated in an in-lab, audio-recorded structured interview. Audio recordings were masked and transcribed to protect confidentiality. As part of a larger study on cheating in college, participants were prompted to discuss their own past experiences with cheating (Waltzer & Dahl, Reference Waltzer and Dahl2023). The larger study included questions that were beyond the scope of the current research. Here, we will describe the aspects of those interviews that are directly relevant to the present study. Participants were asked to describe the most recent time they did something that could count as cheating. They were prompted for details about the type of assignment, the class, their school year when it occurred, and the act of cheating itself, such as who benefitted from the cheating. After participants had described the event, they were asked questions about their evaluations and perceptions of what they had done (Table 2).

Table 2 Prompts used in structured interview about personal experience in Study 1

a Because of a procedural change, this prompt was only presented to 32 participants

Participants’ reasoning (responses to the ‘why’ question) was coded and analyzed, but because these data were not central to the aims of Study 1, they will be discussed in more depth in Study 2. Because this was part of a larger study, some of the data are reported elsewhere (i.e., some of the participants’ perceptions and evaluations at the time of the event; Waltzer & Dahl, Reference Waltzer and Dahl2023).

2.2. Results

The two main goals of Study 1 were to examine whether students might seek to neutralize their own acts of cheating and to generate real-life cheating events that could be tested in Study 2. Below, we describe students’ perceptions and evaluations of their actions. For additional analyses, see the Supplementary Online Materials (SOM, https://osf.io/fn8ys/).

2.2.1. Perceptions of whether the act constituted cheating

Most participants described actions they perceived as cheating at the time of the interview (75%). The remaining participants described actions that they thought might have constituted cheating in the eyes of their teacher, even though the participants themselves did not see it as cheating (e.g., collaborating on an assignment when the teacher did not state whether collaboration was allowed). Still, far more participants perceived their actions as cheating at the time of the interview (45 out of 60 participants, 75%, 95% CI: [62%, 85%]) than at the time of the event (29 out of 60 participants, 48%, 95% CI: [35%, 61%]).

2.2.2. Evaluations about whether the act was okay

Nearly half of the participants (42%) reported that they had judged their actions as wrong (i.e., ‘not okay’) at the time of the event. Their evaluative ratings at the time of the event fell around the middle of the scale (0 = ‘Really bad’ to 10 = ‘Really good’), with an average of 5.10 (SD = 2.68). Our first question was whether participants still judged the act negatively at the time of the interview, or whether they had ‘neutralized’ the act. Contrary to the idea that students would neutralize their actions retrospectively, participants judged their actions as wrong more often and rated their actions more negatively at the time of the interview (41 out of 60 participants judged the act as wrong, 68%, 95% CI: [55%, 79%]; M rating = 3.60, 95% CI: [2.94, 4.26]) than at the time of the event (25 out of 60 participants judged as wrong, 42%, 95% CI: [29%, 55%]; M rating = 5.10, 95% CI: [4.40, 5.81]).
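To make the interval estimates concrete, a 95% confidence interval for a proportion such as 41 of 60 (68%) can be obtained with a standard binomial method. The Python sketch below is illustrative only and is not the study’s analysis code; the specific interval method used for the reported CIs is not stated in this section, so the example uses one common choice (an exact Clopper-Pearson interval), which gives similar values.

```python
# Illustrative only: a 95% binomial CI for a proportion like 41 of 60
# participants (68%). The method here (exact / Clopper-Pearson) is an
# assumption; the paper does not state which interval method was used.
from scipy.stats import binomtest

result = binomtest(k=41, n=60)                      # 41 of 60 judged the act as wrong
ci = result.proportion_ci(confidence_level=0.95)    # exact (Clopper-Pearson) interval
print(f"Proportion = {41/60:.0%}, 95% CI: [{ci.low:.0%}, {ci.high:.0%}]")
```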

2.2.3. Responsibility judgments

Due to a procedural change, only half of the participants were asked who they thought was responsible for what happened. Most of them took responsibility for their actions, naming themselves partly or entirely responsible for what happened (28 out of 32 participants, 88%, 95% CI: [70%, 96%]).

2.2.4. Alternative actions

About half of participants said they would do things differently in retrospect if they could go back and change anything (30 out of 56 participants, 54%, 95% CI: [40%, 67%]).

2.3. Discussion

The results from Study 1 suggest that, in many cases, students do view their past acts of cheating negatively, contrary to predictions from reasoning-peripheral accounts of moral psychology. Students vividly described their past violations, contrary to the idea of motivated forgetting, wherein people selectively remember their positive actions and forget the bad things they have done (Helzer & Dunning, Reference Helzer, Dunning, Vazire and Wilson2012; Pronin et al., Reference Pronin, Gilovich and Ross2004; Stanley & De Brigard, Reference Stanley and De Brigard2019). Most participants took responsibility for what happened and said that they would have wanted to do things differently. And contrary to what a neutralization account would have predicted, participants judged their actions as wrong more often at the time of the interview than at the time of the event, not less often, suggesting that they did not simply excuse their behaviors in retrospect (Bandura, Reference Bandura2016; Fleischhut et al., Reference Fleischhut, Meder and Gigerenzer2017; Shu et al., Reference Shu, Gino and Bazerman2011).

Still, the findings indicate students do find some acts of cheating acceptable: At the time of the interview, about one-third judged that their actions had been okay. Did those students neutralize their actions to make themselves feel better about cheating? Or did their judgments stem from information about the acts and situations that would lead most people to similarly conclude cheating is acceptable in those cases? To answer this, we needed to look at third-party reactions to the same cheating events.

In Study 2, we generated third-party scenarios based on the sixty descriptions of academic misconduct provided by Study 1 participants. By showing these events to a new set of participants, Study 2 could directly examine the role of first- and third-party perspectives in evaluating the same actions, addressing our second research question. A key question was whether first-party judgments and reasoning would predict third-party judgments and reasoning.

3. Study 2: First- and third-party judgments of cheating events

3.1. Method

In Study 2, we developed new scenarios based on the events described in Study 1. We compared the first-party responses from Study 1 to the third-party responses in Study 2, collected from a new sample of participants.

3.1.1. Participants

We recruited a new sample of undergraduate students from the same subject pool (N = 60, 40 women, 18 men, and 2 nonbinary; M age = 19.85; SD age = 1.98) to participate in interviews about the types of actions described by participants in Study 1.

3.1.2. Materials and procedure

Participants came to the laboratory for an in-person structured interview, following a similar procedure for consent, audio-recording, prompting, and debriefing as in Study 1.

The new sample of participants recruited for Study 2 (hereafter referred to as third-party respondents) were asked to evaluate the actions of the Study 1 participants (first-party respondents). We adapted the described cheating actions from Study 1 into scenarios to be used in Study 2 (N = 60 scenarios). We sought to preserve as much information from the original events as possible, except that all personally identifiable information was removed. The name of the protagonist was chosen to resemble the first-party respondent in gender and cultural-linguistic background (e.g., ‘Peter’ for ‘John’, ‘Juan’ for ‘Carlos’). Descriptive features of the events were also categorized and captured in each scenario (e.g., class type, the year they were in school, and who benefitted from the cheating). Each scenario had two parts: The first contained descriptive information about what happened, and the second summarized the protagonist’s reasons for their act. The reasons were derived from the first-party respondent’s interview responses, using actual words and ideas paraphrased from the interview. For consistency, scenarios were restricted to brief passages (M = 224 words, median = 227, range: 134 to 278). Below is an example scenario used in Study 2:

‘When Chen was in his sophomore year of high school, he was in an English class. In this class, students were assigned to write an essay. Chen was given a specific topic, and he looked online for information that was relevant to the topic. He copied pieces here and there from Wikipedia and changed a few words. Even though Chen tried to reorder the words, some parts still matched the online source he used.

Chen did what he did because he felt that he was just lazy. He did not really take classes seriously because he had no interest. At the time, Chen also believed that he was bad at writing. He felt that he just needed to finish the assignment and turn it in. Because Chen had not been taught about plagiarism, he did not know what plagiarism meant’.

Each participant was presented with eight stories. The 60 stories were counterbalanced so that each scenario was presented to eight different participants. After reading the scenario, participants were asked to briefly summarize what happened. They were then asked, ‘Was what [name] did in this story OK or not OK?’ and their reasoning was further prompted (‘Why/Why not?’). They also rated the action on an 11-point scale from 0 (‘really bad’) to 10 (‘really good’) and indicated whether they believed the act counted as cheating. Lastly, they completed a demographic questionnaire (see SOM, https://osf.io/fn8ys). Audio recordings were transcribed for coding.
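As a side note on the counterbalancing: one simple scheme that satisfies the stated constraints (60 participants, eight scenarios each, every one of the 60 scenarios shown to exactly eight participants) is a cyclic assignment. The sketch below illustrates such a scheme; it is a hypothetical construction, not necessarily how stimuli were actually assigned in the study.

```python
# Hypothetical cyclic counterbalancing: participant i sees scenarios
# i, i+1, ..., i+7 (mod 60). With 60 participants and 60 scenarios,
# every scenario is then presented to exactly 8 participants.
from collections import Counter

n_participants, n_scenarios, per_participant = 60, 60, 8
assignments = {
    p: [(p + k) % n_scenarios for k in range(per_participant)]
    for p in range(n_participants)
}

counts = Counter(s for shown in assignments.values() for s in shown)
assert all(count == 8 for count in counts.values())   # each scenario seen 8 times
print(assignments[0])   # participant 0 sees scenarios [0, 1, 2, 3, 4, 5, 6, 7]
```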

3.1.3. Data coding

The transcribed participant responses were separated into individual statements (i.e., complete sentences or ideas) for coding. Two coders independently classified statements based on coding schemes for judgment, uncertainty, and type of reason. These categorization schemes were developed through a mix of bottom-up and top-down approaches. Members of the research team reviewed a subset of the data to inductively generate categories that captured common types of responses (bottom-up approach). Meanwhile, theoretically relevant categories (e.g., affect others) were also added (top-down approach). Reliability was assessed by computing Cohen’s kappa (κ) for both coders’ categorization of a random subset of the data (20% of all responses; McHugh, Reference McHugh2012).
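For readers unfamiliar with the reliability statistic, Cohen’s kappa quantifies agreement between the two coders on the double-coded subset while correcting for chance agreement. The short Python sketch below illustrates the computation with made-up category labels; it is not the study’s coding data.

```python
# Hypothetical illustration of inter-coder reliability: two coders' category
# assignments for the same statements (labels are invented for this example).
from sklearn.metrics import cohen_kappa_score

coder_1 = ["rules", "fairness", "affect agent", "rules", "learning", "honesty"]
coder_2 = ["rules", "fairness", "affect agent", "effort", "learning", "honesty"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa = {kappa:.2f}")
```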

Judgment. We coded any statements that indicated approval or disapproval of the target action as containing a judgment. The initial response to the ‘OK’ question was recorded as the participant’s overall judgment. Participants followed up on their overall judgments by providing reasoning (‘why’ question): These responses were typically multifaceted and often acknowledged why someone might give the opposite judgment. Thus, judgment codes were assigned to every individual statement, starting from the overall judgment. Judgments were marked with a binary code for each statement (okay or not okay). Agreement was high (κ = .83).

Reason. As mentioned, participants were prompted to explain their judgments (e.g., why they thought the protagonist’s action was okay). All statements that contained a judgment were further categorized into different justifications in support of those judgments, including moral (e.g., fairness) and pragmatic (e.g., affect agent) concerns (Table 3). Agreement was high (κ = .84).

Table 3 Summary of reason coding scheme used in Study 2

3.1.4. Data analysis

The goal of Study 2 was to examine whether third-party respondents’ perceptions, evaluations, and reasoning about cheating acts would align with those of the first-party respondents. For example, would acts deemed acceptable by first-party respondents also tend to be accepted by third-party respondents who were uninvolved in the events? To test this, we compared the third-party responses from Study 2 to the first-party responses from Study 1. To make judgments maximally comparable, we used judgments at the time of the interview (not at the time of the event) from Study 1.

3.2. Results

3.2.1. First- and third-party perceptions of described cheating events

Third-party perceptions of cheating aligned with first-party perceptions. In most cases, the third-party respondent agreed with the first-party respondent about whether the act counted as cheating (322 out of 480 scenarios, 67%).

3.2.2. First- and third-party evaluations of described cheating events

First-party and third-party respondents’ judgments of whether the act was okay aligned in the majority of cases: Third-party respondents judged the act the same way as the first-party respondent did in 283 out of 480 cases (59%). For evaluative ratings, we correlated the first-party rating for each scenario with the average rating from all third-party respondents who rated the same scenario. Contrary to expectations, there was no significant correlation between the first-party evaluative rating and the average third-party rating for that scenario, Pearson’s r = .13, p = .339, 95% CI: [-.14, .37]. (Since each third-party respondent contributed multiple ratings, the data points were not strictly independent; still, we include the correlation coefficient to show the fairly weak relationship between first- and third-party ratings.) First-party respondents rated cheating acts slightly more negatively (M = 3.60, 95% CI: [2.94, 4.26]) than the third-party respondents did (M = 4.12, 95% CI: [3.81, 4.42]).
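The scenario-level correlation reported above pairs each scenario’s single first-party rating with the mean of the eight third-party ratings that scenario received. The Python sketch below illustrates that aggregation with simulated ratings; the variable names and data are hypothetical, not the study’s data files.

```python
# Hypothetical sketch of the scenario-level correlation: 60 scenarios, one
# first-party rating each, and 8 third-party ratings per scenario (0-10 scale).
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
first_party = pd.DataFrame({
    "scenario": np.arange(60),
    "rating_first": rng.integers(0, 11, size=60),
})
third_party = pd.DataFrame({
    "scenario": np.repeat(np.arange(60), 8),
    "rating_third": rng.integers(0, 11, size=60 * 8),
})

# Average the eight third-party ratings per scenario, then correlate with the
# matching first-party rating across the 60 scenarios.
mean_third = third_party.groupby("scenario")["rating_third"].mean()
merged = first_party.set_index("scenario").join(mean_third)
r, p = pearsonr(merged["rating_first"], merged["rating_third"])
print(f"Pearson's r = {r:.2f}, p = {p:.3f}")
```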

3.2.3. Reasons mentioned by first- and third-party respondents

Because of procedural differences between Study 2 and Study 1, third-party respondents generally mentioned more reasons than first-party respondents (see SOM, https://osf.io/fn8ys). To visualize patterns across the two studies, we standardized reasons by presenting the frequency of each category as a proportion of the total number of categories mentioned across events (Figure 1).

Figure 1 Summary of reasons provided by first-party respondents and third-party respondents, grouped by whether they were justifications for okay or not okay judgments. Standardized as proportions of all reason categories mentioned.

The reasoning of third-party respondents resembled the reasoning of first-party respondents, suggesting people drew on similar principles regardless of their personal involvement in the events. The three most common reasons provided by first-party respondents—academic misconduct, affect agent, and rules—were also the most common reasons provided by third-party respondents. The pattern was similar for okay and not okay judgments.

3.3. Discussion

As expected, third-party responses to cheating events in Study 2 largely resembled first-party responses to those same events from Study 1. These similarities between first- and third-party respondents were evident for perceptions, evaluations, and reasoning.

The comparison of evaluations in Studies 1 and 2 gave no indication that self-interest drove participants in Study 1 to be more lenient about their own cheating. In fact, our analyses of evaluative ratings found that first-party respondents evaluated their own cheating acts more harshly than third-party respondents. This suggests that, when they are given enough relevant information to understand an event, people are willing to make moral exceptions to the general prohibition against cheating for others as well as for themselves.

First- and third-party respondents also referenced similar types of reasons, suggesting there were things about the situations that people tended to consider regardless of their position. Responses from both sets of participants suggested that judgments about cheating derive from reasoning about many distinct features, such as academic rules, fairness, and the importance of learning and effort, reflecting concerns about cheating that students in prior studies have also expressed (Waltzer & Dahl, Reference Waltzer and Dahl2021, Reference Waltzer and Dahl2023). These reasons point to the variety of challenges students grapple with when they face complex situations involving cheating. The first- and third-party evaluations and reasons were not perfectly identical of course, nor could they be. After all, the first-party respondents had access to more information: They experienced the full event, whereas third-party respondents only read a brief description.

Studies 1 and 2 showed that individuals can provide reasons for their first- and third-party judgments. However, this finding alone does not show that those reasons guided the formation of those judgments. It is conceivable that participants invented post-hoc reasons that bore no relation to the factors that formed their judgments in the first place. To test our proposition that participants’ reasons in Studies 1 and 2 referenced features that actually influenced their judgments, we had to manipulate those features and see whether the judgments changed accordingly. That was the purpose of Study 3.

4. Study 3: Testing whether people’s judgments are sensitive to the features they claim to reason about

Moral psychologists have debated about post-hoc rationalization for over two decades (Cushman, Reference Cushman2020; Dahl & Waltzer, Reference Dahl and Waltzer2020; Haidt, Reference Haidt2001). According to the reasoning-peripheral view, the reasons people provide for their evaluations are usually post-hoc rationalizations that have nothing to do with how the evaluations were generated in the first place (Cushman, Reference Cushman2020; Haidt, Reference Haidt2007; Schwitzgebel & Ellis, Reference Schwitzgebel, Ellis, Bonnefon and Trémolière2017; Sharot et al., Reference Sharot, Velasquez and Dolan2010; Vinckier et al., Reference Vinckier, Rigoux, Kurniawan, Hu, Bourgeois-Gironde, Daunizeau and Pessiglione2019). In contrast, according to the reasoning-central view, in research interviews, people usually provide the reasons that actually shaped their judgments (Dahl et al., Reference Dahl, Gingo, Uttich and Turiel2018).

Consider reasons coded as effort. Many third-party respondents explained that the act in the scenario was okay because the hypothetical student put in a lot of effort (e.g., they worked hard to complete the assignment). The reasoning-peripheral view asserts that such reasons are likely post-hoc rationalizations unrelated to how participants formed their judgments. In that case, if we manipulated the features of the situation so that the hypothetical character put in very little effort, participants’ judgments should remain unchanged. The reasoning-central view makes the opposite prediction. It proposes that participants who provided effort reasons actually reasoned about effort when making their judgment. Accordingly, the reasoning-central view predicts that participants’ judgments should change when the character’s effort changes, and they would judge the act less favorably if the character put in less effort.

Study 3 implemented this logic to test the competing predictions from the reasoning-peripheral and reasoning-central views. We manipulated the features of the scenarios that third-party respondents in Study 2 referenced in their reasons. (Reasons in Study 1 and Study 2 were similar, but we had more data available in Study 2, so we used reasons from Study 2 to generate scenarios for Study 3.)

Based on the reasoning-central view, we predicted that manipulating the features referenced by Study 2 participants would affect the judgments of Study 3 participants. We expected that features mentioned in favor of cheating in Study 2 (for cheating features) would lead Study 3 participants to evaluate cheating more positively, and that features mentioned against cheating in Study 2 (against cheating features) would lead Study 3 participants to evaluate cheating more negatively. To illustrate: When participants in Study 2 said that the character’s action was okay because the character put in a lot of effort, we created a variant in Study 3 where the character did not put in such effort. We expected this to make participants judge the act more negatively. Conversely, when participants in Study 2 said the act was wrong because the teacher had prohibited collaboration, we created a variant where the teacher now permitted collaboration. We expected this to make participants judge the act more positively. In both cases, if Study 2 participants were post-hoc rationalizing, the manipulations should, on average, have no effect on Study 3 participants’ evaluations. (By recruiting a new sample of participants for Study 3, instead of manipulating the scenarios in Study 2, we avoided the concern that participants would just change their judgments to remain consistent with their stated reasons.)

4.1. Method

Scenarios for Study 3 were modified using reasons derived from participants’ responses in Study 2. The modifications either added a reason for or against committing the act in each scenario. These modifications were based on the most common categories of reasons third-party respondents used to justify why the act in that particular scenario was okay or not okay.

4.1.1. Participants

We recruited 98 undergraduate students (59 women, 28 men, and 11 nonbinary; M age = 21.05; SD age = 3.15) from the same university subject pool as in Studies 1 and 2 to participate in an online survey about academic situations.

4.1.2. Materials and procedure

Scenario Development. We first identified the most common reasons third-party respondents gave for and against the actions in Study 2 (Table 4). (We excluded academic misconduct and features of assignment because we could not manipulate these features without fundamentally altering the action in question: If Study 2 participants said an act was wrong because it was cheating, it would not be very interesting to ask Study 3 participants about the same act labeled as ‘not cheating’.) The four most frequently mentioned reasons in favor of an act were affect agent, effort, rules, and learning. The four most frequently mentioned reasons against an act were rules, affect agent, fairness, and honesty.

Table 4 Examples of stimuli phrases used in Study 3 and frequencies from Study 2

For each of these eight types of reasons for and against cheating, we selected two scenarios out of the original 60 scenarios from Study 2. The chosen scenarios were those that had the most third-party respondents giving that type of reason for or against the act. Ties were settled by picking whichever scenario had the highest (for cheating stimuli) or lowest (against cheating stimuli) average evaluative rating. For example, when deciding on the stories to manipulate the feature rules, three scenarios all had five out of eight participants mentioning rules as a reason against the action. This tie was settled by selecting the two stories with the lowest average ratings. This process yielded 16 different cheating events (eight types of reasons x two events for each type). Specifically, these consisted of four affect agent, two effort, two fairness, two honesty, two learning, and four rule events. We created two variant scenarios for each event: one for cheating variant, where the relevant feature was manipulated in favor of cheating, and one against cheating variant, where the relevant feature was manipulated against cheating. That yielded a total of 32 scenarios for Study 3. Table 4 shows examples of the phrases used to manipulate the features.
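Stated compactly, the selection rule ranks scenarios within each reason type by how many third-party respondents cited that reason, breaks ties on the average evaluative rating (lowest for against cheating stimuli, highest for for cheating stimuli), and keeps the top two. The sketch below illustrates the rule with hypothetical values; it is not the code or data actually used to build the stimuli.

```python
# Hypothetical illustration of the Study 3 scenario-selection rule for an
# 'against cheating' reason type (e.g., rules): most citations first, then
# lowest average rating to break ties, keeping the top two scenarios.
import pandas as pd

scenarios = pd.DataFrame({
    "scenario": ["A", "B", "C", "D"],
    "n_citing_reason": [5, 5, 5, 3],      # respondents citing the reason against the act
    "mean_rating": [2.1, 4.0, 3.2, 5.5],  # average evaluative rating (0-10)
})

selected = (
    scenarios
    .sort_values(["n_citing_reason", "mean_rating"], ascending=[False, True])
    .head(2)["scenario"]
    .tolist()
)
print(selected)   # ['A', 'C']: the tie among A, B, C is broken by lowest rating
```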

Procedures. Participants responded to a 30-minute online survey administered through Qualtrics. In a within-subjects design, each participant read and evaluated all 16 scenarios, presented in a randomized order for each participant. For each of the scenario types, the survey randomly displayed either the for or against cheating variant of that scenario. This variant assignment was manipulated between subjects, so that each participant saw only one variant of each of the 16 events. After reading each scenario, participants judged whether the protagonist’s act was okay or not okay and evaluatively rated the action (from 0 = ‘Really bad’ to 10 = ‘Really good’), as in Studies 1 and 2. After participants selected a judgment of ‘okay’ or ‘not okay’, they were asked, ‘why do you think so?’ and were given six multiple-select options as reasons justifying their judgments (Table 5). Because the reasoning categories had already been established in Studies 1 and 2, participants could indicate their reasoning quickly and easily via checkboxes, and open-ended responses were not needed. Demographic information was collected at the end of the survey (see SOM, https://osf.io/fn8ys).

Table 5 Evaluative prompts used in Study 3, presented following each scenario

4.1.3. Data analysis

To assess whether the features we manipulated in the Study 3 scenarios would influence evaluations, we tested our key hypotheses by comparing responses to the for cheating vs. against cheating variants of each event. For okay judgments, we used Fisher’s exact tests to compare the two variants of each event. For evaluative ratings, we used independent-samples t-tests. We also wanted to test whether participants reasoned about the relevant features in the scenarios. To do this, we used generalized linear mixed models (GLMMs; Hox, Reference Hox2010) to predict whether each type of reason (e.g., learning) was selected as a function of scenario type (e.g., learning scenario vs. not learning scenario). Models also included random intercepts for participants and scenarios. We used likelihood ratio tests on model deviance to test for significance.
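For concreteness, the sketch below shows how the per-event tests could be run for a single event’s two variants: Fisher’s exact test on the okay/not-okay counts and an independent-samples t-test on the ratings. The counts and ratings are invented for illustration, and the crossed-random-intercept GLMMs and likelihood ratio tests are not reproduced here because they require dedicated mixed-model software.

```python
# Hypothetical per-event comparison of the 'for cheating' vs. 'against cheating'
# variants of one scenario. All numbers are made up for illustration.
import numpy as np
from scipy.stats import fisher_exact, ttest_ind

# Okay judgments: rows are variants, columns are [okay, not okay] counts.
table = np.array([[30, 19],    # 'for cheating' variant
                  [16, 33]])   # 'against cheating' variant
odds_ratio, p_fisher = fisher_exact(table)

# Evaluative ratings (0-10) from participants who saw each variant.
ratings_for = np.array([6, 7, 5, 6, 8, 5, 6, 7])
ratings_against = np.array([4, 5, 3, 4, 5, 4, 3, 5])
t_stat, p_ttest = ttest_ind(ratings_for, ratings_against)

print(f"Fisher's exact p = {p_fisher:.3f}; t(14) = {t_stat:.2f}, p = {p_ttest:.3f}")
```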

4.2. Results

4.2.1. Effect of manipulated features on evaluations

Okay Judgments. Overall, participants more often judged the act as okay when they saw the for cheating variant (61%, 95% CI: [58%, 65%]) than when they saw the against cheating variant (33%, 95% CI: [30%, 37%]). Figure 2 illustrates the effect of the manipulated features in each of the 16 scenarios that participants evaluated.

Figure 2 Judgments about whether the action was okay, grouped by the manipulated types of reasons for or against the act in each scenario. On the x-axis, the number represents different scenarios (e.g., there were four different affect agent events).

Evaluative Ratings. Similarly, the for cheating variant scenarios elicited more positive ratings (M = 5.62, 95% CI: [5.45, 5.79]) than the against cheating variants (M = 4.46, 95% CI: [4.31, 4.61]). Figure 3 depicts the effects of the manipulated features on ratings for each scenario.

Figure 3 Evaluative ratings of scenarios, grouped by the manipulated types of reasons. Error bars represent standard errors of the mean. On the x-axis, the number represents different scenarios.

4.2.2. Relationship between manipulated features and evaluative reasoning

As expected, when we manipulated a feature of a scenario, participants became more likely to select that feature as a reason for their evaluation. On average, when we manipulated a feature, participants selected that reason 54% of the time; in contrast, when we did not manipulate a feature, participants selected the relevant reason only 34% of the time. For example, when we manipulated learning, participants selected learning as a reason for their judgment (i.e., ‘Because of how much [protagonist] is learning’) 64% of the time. But in the scenarios that did not manipulate learning, participants selected learning only 33% of the time (Table 6). GLMMs showed that the effect was significant when we manipulated the features of affect agent, effort, learning, rules, and fairness, ps < .028. The effect was not significant for honesty, p = .739.

Table 6 Summary of scenarios in which participants were more or less likely to choose reasons

a Participants were significantly more likely to select the reason in a scenario with the manipulated feature compared to scenarios that did not manipulate that feature

b Participants were significantly less likely to select the reason in a scenario with the manipulated feature compared to scenarios that did not manipulate that feature

4.3. Discussion

The findings of Study 3 supported our hypothesis that the reasons participants provided in Studies 1 and 2 referenced features that affected participants’ judgments. If students can recognize and articulate the reasons that guide their judgments about cheating events, then changing those events to alter those reasons should change another person’s judgments accordingly. When we manipulated the features that participants from Study 2 referenced in their reasoning, those features influenced participants’ evaluations as hypothesized in Study 3. Participants were also more likely to select the relevant reasons as justifications for their judgments, supporting our interpretation that the manipulated features indeed guided participants’ reasoning.

5. General discussion

This research addressed longstanding debates about the role of moral reasoning in people’s judgments and decisions. According to the reasoning-peripheral view, reasoning about moral principles usually plays little role in what we deem right or wrong, or what we decide to do (Haidt, Reference Haidt2001; Hindriks, Reference Hindriks2015; McHugh et al., Reference McHugh, McGann, Igou and Kinsella2020; Schwitzgebel & Ellis, Reference Schwitzgebel, Ellis, Bonnefon and Trémolière2017). We drew on a situated, reasoning-central framework of moral decision-making to address these ongoing debates in moral psychology (Dahl & Killen, Reference Dahl, Killen, Wixted and Ghetti2018; Killen & Dahl, Reference Killen and Dahl2021; Turiel & Dahl, Reference Turiel, Dahl, Bayertz and Roughley2018). Across three studies, we tested key predictions of the reasoning-central framework as applied to a real-world issue: college students’ judgments about real cheating events.

5.1. To what extent do people neutralize their past acts of cheating?

Though many students say they believe cheating is generally wrong, some scholars have suggested that students often neutralize such concerns when cheating serves their own interest (Haines et al., 1986; Rettinger & Kramer, 2009; Stephens, 2017; Sykes & Matza, 1957). More broadly, psychologists have argued that people seek to reconstrue many of their wrongdoings by making excuses to avoid feeling bad about themselves (Bandura, 2016; Detert et al., 2008; Shu et al., 2011). In this framework, reasoning is used to fulfill selfish purposes. The debate pertains to whether reasoning guides people to robustly evaluate cheating as wrong in most cases, or whether it allows them to easily discard the wrongness of cheating when it comes to their own actions.

The present research showed that, rather than absolving themselves of guilt, students acknowledged their own wrongdoing and took responsibility for their cheating actions. Students’ evaluations appeared to have become even more negative since the event, despite having had more time to rationalize their actions. The findings suggest that students generally care about academic integrity and remain morally engaged even when they cheat (Dahl & Waltzer, 2018; Waltzer & Dahl, 2022). Participants did not seem to readily bend or turn off their moral principles to see themselves in a positive light. Future research could build on these initial findings by directly measuring feelings of guilt and shame and testing the role of factors such as the type of cheating event, the duration of time that had elapsed since the event, and how other people responded to the act.

5.2. Do people judge others’ acts of cheating more harshly than their own acts?

The second question we addressed was whether people judge the same events more harshly from a third-party point of view (about others’ actions) than from a first-party point of view (about their own actions). Reasoning-peripheral accounts imply that people generally evaluate their own actions favorably by selectively focusing on the more positive aspects of their actions (Ditto et al., 2009; Helzer & Dunning, 2012; Shu et al., 2011; Stanley & De Brigard, 2019). Our reasoning-central account of decision-making predicts that people would use the same features of the events to guide their judgments, regardless of perspective. While many have focused on the differences between first-party and third-party perspectives, the present work revealed striking similarities (Jones & Nisbett, 1972; Pronin et al., 2004). We found that third-party respondents’ perceptions and evaluations of cheating events tended to align with those of the first-party respondents involved in those events. These findings suggest that the specific content of an action and the context in which it occurs are more influential drivers of moral judgments than the perspective of the evaluator.

Prior findings of differences between first- and third-party judgments may have arisen from the fact that people had access to much more information about first-party events than about third-party events. When a person cheats, they are in a unique position: they have information about the event that outsiders cannot readily access. People’s informational assumptions (their beliefs about the context and nature of the act) likely play a crucial role in moral judgments, above and beyond their personal involvement in the event (Ajzen et al., 2011; Ajzen & Fishbein, 2005; Beck & Ajzen, 1991; Schein, 2020; Van den Bos, 2003; Wainryb, 1991; Wainryb & Turiel, 1993). In this way, a person may judge their actions more favorably from a first-party perspective because of the information available to them, not due to a self-serving bias.

A handful of studies have focused on cases in which third-party respondents judge an action more positively than the first-party actor (Gold et al., 2015; Nadelhoffer & Feltz, 2008). The present findings offered partial support for this possibility. Future research should directly examine the context of cheating events to see how each party’s position affords access to different information.

5.3. Do people articulate reasons that account for judgments about cheating?

Our third question was whether the reasons people provide as justifications for their judgments are actually the reasons that guide those judgments. Reasoning-peripheral accounts of moral decision-making posit that reasons mostly serve a post-hoc function of making sense of one’s judgment after already forming it (Cushman, 2020; Sharot et al., 2010; Vinckier et al., 2019). In contrast, our proposed reasoning-central approach places reasons at the forefront of judgments, though we do acknowledge that they are not the only factor guiding judgments (e.g., emotions can also play a role; Dahl et al., 2023).

The present research offered evidence that reasons guide both first- and third-party moral judgments. Students’ articulated reasons about why cheating events were wrong or acceptable in Studies 1 and 2 did indeed guide judgments about similar scenarios in Study 3. These findings suggested that people can recognize relevant considerations about actions that influence people’s moral evaluations. While prompting participants to share their reasons in an open-ended format represents just one of many converging techniques for assessing moral reasoning, the present findings offer encouragement that such prompts can generate valid and genuine reasoning data. This can support research-driven efforts to reduce cheating: The reasons students give for why they cheat could be addressed in interventions, and we should expect cheating rates to go down in response (Beasley, 2014; Stephens, 2017). Future research could use an experimental approach, modifying the factors students mention in their reasons for cheating and testing whether those modifications affect students’ decisions to cheat when given the opportunity to do so (Bostyn et al., 2018; Gold et al., 2014; Waltzer & Dahl, 2022). Studying decisions in real-time would also reduce reliance on first-party recall of prior events.

5.4. A reasoning-central approach to moral decision-making

In this paper, we studied cheating to test predictions from the reasoning-central view of moral psychology. The key tenet of this view is that most moral judgments and decisions derive from reasoning about evaluative principles. Such reasoning can vary in speed or conscious awareness; its hallmark is the formation of moral judgments and decisions in accordance with principles people can articulate and endorse (Dahl & Killen, 2018; Killen & Dahl, 2021; Turiel & Dahl, 2018). Our findings were consistent with this view.

Situational variability is built into the reasoning-central approach. Although most people judge dishonesty to be generally wrong, judgments about specific acts of dishonesty often require individuals to balance competing principles. All three studies illustrated how participants varied their judgments about cheating in response to reasoning about situational features. Thus, while it can be valuable to ask students about their evaluations of cheating in general (as many researchers do), these general questions should not keep us from recognizing and researching situational variability too (Barnhardt, 2016; Davis et al., 1992; McCabe et al., 2012; Waltzer et al., 2022; Waltzer & Dahl, 2021; Yachison et al., 2018).

Insofar as people’s moral judgments are shaped by principled reasoning, providing more contextual details of cheating events should bring first-party and third-party judgments into even closer alignment than we found here. To test this, a future study could systematically manipulate the amount of contextual detail provided in scenarios and measure the correlation between first- and third-party evaluations. In everyday life, disagreements between people about moral transgressions can arise when one party lacks the full context of the event (Schein, 2020; Wainryb, 1991). One practical example is when instructors need to make decisions about cases of cheating in their classes based on very limited information. Future research could explore how the fact-finding conversation that typically occurs between an instructor and a student suspected of cheating may shift the instructor’s judgments about the detected cheating act.

Our findings run counter to the predictions of the reasoning-peripheral view. First, we did not find indications that participants were readily neutralizing or morally disengaging from their own acts of cheating. Studies 1 and 2 showed that the evaluations and reasoning students provided about their own cheating resembled the evaluations and reasoning other students provided about those same actions, even though the latter had no selfish motive to excuse those behaviors. Second, we did not find evidence that the reasons students provided for their judgments were mere post-hoc rationalizations lacking a causal relation to how those judgments were formed. When we manipulated the situational features that students’ reasons referenced in Study 2, participants’ judgments changed accordingly in Study 3: The features that students claimed to reason about did indeed affect judgments about the same scenarios.

5.5. A paradigm for future research on moral reasoning

Our conclusions are tempered by our present focus on cheating among samples of undergraduate students. To support firmer conclusions about the role of reasoning in moral functioning more broadly, studies will need to sample from more diverse populations and types of situations. These could include a range of common everyday transgressions, such as littering, shoplifting, and driving over the speed limit. The present sequence of studies offers a paradigm for conducting such research by (1) eliciting first-person descriptions, evaluations, and reasoning about morally relevant actions; (2) eliciting third-person evaluations and reasoning about those same actions; and (3) manipulating situational features referenced by participants’ reasoning.

To assess the relative merits of the reasoning-central and reasoning-peripheral views, it will also be essential to adapt the present paradigm to event types that have played a major role in prior theoretical debates. These include various purity violations and trolley dilemmas (Greene, 2013; Haidt, 2012). Some such events are so unusual or unrealistic that few or no participants will have encountered them. It is safe to assume that participants have not faced trolley dilemmas in real life, let alone some of the unsavory events depicted in research on purity violations. For these events, questions about neutralization or moral disengagement hardly arise. Instead, the focus turns to questions about whether participants’ stated justifications reflect reasons that shaped their judgments or mere post-hoc rationalizations. In these cases, one could skip step (1), the first-party assessment, and go directly to (2) eliciting third-party evaluations and reasoning before (3) manipulating the features referenced in participant reasoning (for an application of this paradigm to trolley dilemmas, see Dahl et al., 2018).

6. Conclusion

Debates about when and how moral reasoning guides judgments and decisions about cheating have persisted for decades. This paper advances these debates by examining students’ situated reasoning and judgments about real cheating events. To do this, we presented a new methodological approach that draws on people’s lived experiences to inform psychological theory. The findings from this work are encouraging for those who have lived their lives thinking that moral reasoning matters deeply for how we view ourselves and others: Participants did indeed seem to use principled reasoning to inform their moral judgments, even when it might have been in their self-interest to do otherwise. Beyond cheating, the methodological and theoretical approach taken here could be adapted to shed light on other phenomena in which individuals violate general principles of right and wrong (Dahl, 2017; Killen & Dahl, 2021).

Supplementary material

The supplementary material for this article can be found at http://doi.org/10.1017/jdm.2024.7.

Open science statement

The materials and data for this research are openly available at https://osf.io/fn8ys/.

Acknowledgments

We thank Kevin J. Chen, Charles Baxley, Fiona C. DeBernardi, Max Wechsler-Azen, and other members of the Academic Orientations Project and Developmental Moral Psychology Lab at the University of California, Santa Cruz, for their contributions to the design, data collection, and manuscript preparation.

Author contributions

Waltzer and Dahl conceptualized and designed the studies. Waltzer and Samuelson created the study materials, piloted the studies, and collected the data. Waltzer carried out data analyses, with input from Samuelson and Dahl. Samuelson and Waltzer drafted and revised the manuscript, with feedback and edits from Dahl. Dahl oversaw the project as the supervising faculty member.

Funding statement

This research received no specific grant funding from any funding agency, commercial or not-for-profit sectors.

Competing interest

The authors declare none.

Footnotes

Tal Waltzer and Arvid Samuelson are joint first authors.

1 GLMMs were modeled using the glmer() function from the lme4 package (v1.1.33; Bates et al., 2015) in R statistical software (v4.3.0; R Core Team, 2021). We ran separate analyses for each combination of the six manipulated features and the six reason types (36 total). The dependent variable was whether a participant selected a given reason (e.g., learning) for a scenario. The fixed effect was whether a scenario manipulated a given feature (e.g., learning). Models included random intercepts for participants and scenarios. Hypotheses were tested using likelihood-ratio tests. The likelihood-ratio test statistic, which represents the change in model deviance between the full and restricted models, was compared to a chi-squared distribution with degrees of freedom equal to the difference in the number of estimated parameters in the full and restricted models (Hox, 2010).

This is the syntax used for each model:

glmer(reason ~ (1|id) + (1|scenario) + I(feature == f), data = dta, family = "binomial")

reason = binary value for whether or not the participant selected the reason (e.g., learning); id = identifier for each participant; scenario = one of 16 scenarios read by participants; f = the name of the manipulated feature being tested (e.g., learning), so that I(feature == f) codes whether a given scenario manipulated that feature or not (e.g., learning scenarios vs. all other scenarios); dta = a dataframe with one row per participant for each scenario.
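To illustrate the likelihood-ratio tests described above, the following sketch fits the full and restricted models for one example feature (learning) and compares them. This is a minimal illustration consistent with the syntax shown here, not the authors’ actual analysis script: the object names, the choice of example feature, and the assumption that dta contains the columns described above are ours.

library(lme4)

# Full model: fixed effect for whether the scenario manipulated the learning feature,
# plus random intercepts for participants and scenarios
full_model <- glmer(reason ~ (1|id) + (1|scenario) + I(feature == "learning"),
                    data = dta, family = "binomial")

# Restricted model: same random effects, without the fixed effect
restricted_model <- glmer(reason ~ (1|id) + (1|scenario),
                          data = dta, family = "binomial")

# Likelihood-ratio test: the change in deviance is compared against a chi-squared
# distribution with degrees of freedom equal to the difference in estimated parameters
anova(restricted_model, full_model)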

References

Adler, J. E., & Rips, L. J. (Eds.) (2008). Reasoning: Studies of human inference and its foundations. Cambridge: Cambridge University Press.
Ajzen, I., & Fishbein, M. (2005). The influence of attitudes on behavior. In Albarracín, D., Johnson, B. T., & Zanna, M. P. (Eds.), The handbook of attitudes (pp. 173–221). Lawrence Erlbaum Associates Publishers.
Ajzen, I., Joyce, N., Sheikh, S., & Cote, N. G. (2011). Knowledge and the prediction of behavior: The role of information accuracy in the theory of planned behavior. Basic and Applied Social Psychology, 33(2), 101–117. https://doi.org/10.1080/01973533.2011.568834
Ariely, D. (2012). The honest truth about dishonesty: How we lie to everyone—especially ourselves. New York, NY: Harper Collins.
Bandura, A. (2002). Selective moral disengagement in the exercise of moral agency. Journal of Moral Education, 31, 101–119. https://doi.org/10.1080/0305724022014322
Bandura, A. (2016). Moral disengagement: How people do harm and live with themselves. New York: Macmillan Higher Education.
Barnhardt, B. (2016). The “epidemic” of cheating depends on its definition: A critique of inferring the moral quality of “cheating in any form.” Ethics & Behavior, 26(4), 330–343. https://doi.org/10.1080/10508422.2015.1026595
Bates, D., Maechler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01
Batson, C. D. (2017). Getting cynical about character: A social-psychological perspective. In Sinnott-Armstrong, W. & Miller, C. B. (Eds.), Moral psychology: Virtue and character (Vol. 5, pp. 11–44). Boston Review. https://doi.org/10.2307/j.ctt1n2tvzm.5
Batson, C. D., Kobrynowicz, D., Dinnerstein, J. L., Kampf, H. C., & Wilson, A. D. (1997). In a very different voice: Unmasking moral hypocrisy. Journal of Personality and Social Psychology, 72(6), 1335–1348. https://doi.org/10.1037/0022-3514.72.6.1335
Beasley, E. M. (2014). Students reported for cheating explain what they think would have stopped them. Ethics & Behavior, 24(3), 229–252. https://doi.org/10.1080/10508422.2013.845533
Beck, L., & Ajzen, I. (1991). Predicting dishonest actions using the theory of planned behavior. Journal of Research in Personality, 25(3), 285–301. https://doi.org/10.1016/0092-6566(91)90021-H
Blasi, A. (1980). Bridging moral cognition and moral action: A critical review of the literature. Psychological Bulletin, 88(1), 1–45. https://doi.org/10.1037/0033-2909.88.1.1
Bok, S. (1978). Lying: Moral choice in public and private life. New York: Pantheon.
Bostyn, D. H., Sevenhant, S., & Roets, A. (2018). Of mice, men, and trolleys: Hypothetical judgment versus real-life behavior in trolley-style moral dilemmas. Psychological Science, 29(7), 1084–1093. https://doi.org/10.1177/0956797617752640
Bouville, M. (2010). Why is cheating wrong? Studies in Philosophy and Education, 29(1), 67–76. https://doi.org/10.1007/s11217-009-9148-0
Bretag, T. (Ed.). (2020). A research agenda for academic integrity. Edward Elgar Publishing.
Brown, D. L. (2002). Cheating must be okay – everybody does it! Nurse Educator, 27, 6–8.
Campbell, R., & Kumar, V. (2012). Moral reasoning on the ground. Ethics, 122(2), 273–312. https://doi.org/10.1086/663980
Cizek, G. J. (2003). Detecting and preventing classroom cheating: Promoting integrity in assessment. Thousand Oaks: Corwin Press.
Cushman, F. (2020). Rationalization is rational. Behavioral and Brain Sciences, 43, E28. https://doi.org/10.1017/S0140525X19001730
Dahl, A. (2017). Ecological commitments: Why developmental science needs naturalistic methods. Child Development Perspectives, 11, 79–84. https://doi.org/10.1111/cdep.12217
Dahl, A., Gingo, M., Uttich, K., & Turiel, E. (2018). Moral reasoning about human welfare in adolescents and adults: Judging conflicts involving sacrificing and saving lives. Monographs of the Society for Research in Child Development, 83(3), 1–133. https://doi.org/10.1111/mono.12374
Dahl, A., & Killen, M. (2018). Moral reasoning: Theory and research in developmental science. In Wixted, J. T. & Ghetti, S. (Eds.), The Stevens’ handbook of experimental psychology and cognitive neuroscience (4th ed., Vol. 4, pp. 323–353). New York, NY: Wiley.
Dahl, A., Martinez, M. G. S., Baxley, C., & Waltzer, T. (2023). Early moral development: Four phases of construction through social interactions. In Killen, M. & Smetana, J. G. (Eds.), Handbook of moral development (3rd ed., pp. 135–152). Routledge.
Dahl, A., & Waltzer, T. (2018). Moral disengagement as a psychological construct. American Journal of Psychology, 131, 240–246. https://doi.org/10.5406/amerjpsyc.131.2.0240
Dahl, A., & Waltzer, T. (2020). Rationalization is rare, reasoning is pervasive. Behavioral and Brain Sciences, 43, e33. https://doi.org/10.1017/S0140525X19002140
Darwall, S. (2009). The second-person standpoint: Morality, respect, accountability. Harvard University Press.
Davis, S. F., Grover, C. A., Becker, A. H., & McGregor, L. N. (1992). Academic dishonesty: Prevalence, determinants, techniques, and punishments. Teaching of Psychology, 19, 16–20.
DeBernardi, F. C., Waltzer, T., & Dahl, A. (2021). Cheating contextualized: How academic pressures lead to moral exceptions. Talk presented at the Association for Moral Education 47th Annual Conference, Virtual.
DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein, J. A. (1996). Lying in everyday life. Journal of Personality and Social Psychology, 70(5), 979–995. https://doi.org/10.1037/0022-3514.70.5.979
Detert, J. R., Treviño, L. K., & Sweitzer, V. L. (2008). Moral disengagement in ethical decision making: A study of antecedents and outcomes. Journal of Applied Psychology, 93(2), 374–391. https://doi.org/10.1037/0021-9010.93.2.374
Ditto, P. H., Pizarro, D. A., & Tannenbaum, D. (2009). Motivated moral reasoning. In Bartels, D. M., Bauman, C. W., Skitka, L. J., & Medin, D. L. (Eds.), Moral judgment and decision making (Vol. 50, pp. 307–338). Elsevier Academic Press. https://doi.org/10.1016/S0079-7421(08)00410-6
Drake, C. A. (1941). Why students cheat. Journal of Higher Education, 12, 418–420. https://doi.org/10.2307/1976003
Erat, S., & Gneezy, U. (2012). White lies. Management Science, 58(4), 723–733. https://doi.org/10.1287/mnsc.1110.1449
Fleischhut, N., Meder, B., & Gigerenzer, G. (2017). Moral hindsight. Experimental Psychology, 64(2), 110–123. https://doi.org/10.1027/1618-3169/a000353
Gold, N., Colman, A. M., & Pulford, B. D. (2014). Cultural differences in responses to real-life and hypothetical trolley problems. Judgment and Decision Making, 9(1), 65–76.
Gold, N., Pulford, B. D., & Colman, A. M. (2015). Do as I say, don’t do as I do: Differences in moral judgments do not translate into differences in decisions in real-life trolley problems. Journal of Economic Psychology, 47, 50–61. https://doi.org/10.1016/j.joep.2015.01.001
Graham, J. (2014). Morality beyond the lab. Science, 345(6202), 1242. https://doi.org/10.1126/science.1259500
Greene, J. (2013). Moral tribes: Emotion, reason, and the gap between us and them. Penguin.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834. https://doi.org/10.1037/0033-295X.108.4.814
Haidt, J. (2007). The new synthesis in moral psychology. Science, 316(5827), 998–1002. https://doi.org/10.1126/science.1137651
Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Vintage.
Haines, V. J., Diekhoff, G. M., LaBeff, E. E., & Clark, R. E. (1986). College cheating: Immaturity, lack of commitment, and the neutralizing attitude. Research in Higher Education, 25(4), 342–354. https://doi.org/10.1007/BF00992130
Harman, G. (1986). Change in view: Principles of reasoning. The MIT Press.
Hartshorne, H., & May, M. A. (1928). Studies in the nature of character: Volume I. Studies in deceit. New York: Macmillan.
Helzer, E. G., & Dunning, D. (2012). On motivated reasoning and self-belief. In Vazire, S. & Wilson, T. D. (Eds.), Handbook of self-knowledge (pp. 379–396). The Guilford Press.
Hindriks, F. (2015). How does reasoning (fail to) contribute to moral judgment? Dumbfounding and disengagement. Ethical Theory and Moral Practice, 18(2), 237–250. https://doi.org/10.1007/s10677-015-9575-7
Ho, B. (2021). Why trust matters: An economist’s guide to the ties that bind us. Columbia University Press.
Hofmann, W., Wisneski, D. C., Brandt, M. J., & Skitka, L. J. (2014). Morality in everyday life. Science, 345(6202), 1340–1343. https://doi.org/10.1126/science.1251560
Hox, J. (2010). Multilevel analysis: Techniques and applications (2nd ed.). New York, NY: Routledge.
Hume, D. (1739). A treatise of human nature. Clarendon Press.
Jacobson, D. (2012). Moral dumbfounding and moral stupefaction. In Timmons, M. (Ed.), Oxford studies in normative ethics (Vol. 2, pp. 289–316). Oxford University Press.
Jensen, L. A., Arnett, J. J., Feldman, S. S., & Cauffman, E. (2002). It’s wrong, but everybody does it: Academic dishonesty among high school and college students. Contemporary Educational Psychology, 27(2), 209–228. https://doi.org/10.1006/ceps.2001.1088
Jones, E. E., & Nisbett, R. E. (1972). The actor and the observer: Divergent perceptions of the cause of behavior. In Jones, E. E., Kanouse, D. E., Kelley, H. H., Nisbett, R. E., Valins, S., & Weiner, B. (Eds.), Attribution: Perceiving the causes of behavior (pp. 79–94). Morristown, NJ: General Learning Press.
Jordan, A. H., & Monin, B. (2008). From sucker to saint: Moralization in response to self-threat. Psychological Science, 19(8), 809–815. https://doi.org/10.1111/j.1467-9280.2008.02161.x
Killen, M., & Dahl, A. (2018). Moral judgment: Reflective, interactive, spontaneous, challenging, and always evolving. In Gray, K., & Graham, J. (Eds.), Moral atlas (pp. 20–30). Guilford Press.
Killen, M., & Dahl, A. (2021). Moral reasoning enables developmental and societal change. Perspectives on Psychological Science, 16(6), 1209–1225. https://doi.org/10.1177/1745691620964076
Kohlberg, L. (1971). From is to ought: How to commit the naturalistic fallacy and get away with it in the study of moral development. In Mischel, T. (Ed.), Cognitive development and epistemology (pp. 101–189). Academic Press.
Kolb, R. W. (2008). Universalizability, principle of. In Encyclopedia of business ethics and society (Vol. 1, pp. 2146–2148). SAGE Publications. https://doi.org/10.4135/9781412956260.n829
Krebs, D. L., & Laird, P. G. (1998). Judging yourself as you judge others: Moral development and exculpation. Journal of Adult Development, 5, 1–12.
Landy, J. F., & Royzman, E. B. (2018). The moral myopia model: Why and how reasoning matters in moral judgment. In Pennycook, G. (Ed.), The new reflectionism in cognitive psychology (pp. 76–98). Routledge.
Levine, T. R. (2014). Truth-default theory (TDT): A theory of human deception and deception detection. Journal of Language and Social Psychology, 33(4), 378–392. https://doi.org/10.1177/0261927X14535916
Levine, T. R., Kim, R. K., & Hamel, L. R. (2010). People lie for a reason: Three experiments documenting the principle of veracity. Communication Research Reports, 27(4), 271–285. https://doi.org/10.1080/08824096.2010.496334
McCabe, D. L. (1997). Classroom cheating among natural science and engineering majors. Science and Engineering Ethics, 3, 433–445.
McCabe, D. L., Butterfield, K. D., & Treviño, L. K. (2012). Cheating in college: Why students do it and what educators can do about it. Baltimore: The Johns Hopkins University Press.
McCall, R. B. (1977). Challenges to a science of developmental psychology. Child Development, 48(2), 333–344. https://doi.org/10.2307/1128626
McHugh, C., McGann, M., Igou, E. R., & Kinsella, E. L. (2020). Reasons or rationalizations: The role of principles in the moral dumbfounding paradigm. Journal of Behavioral Decision Making, 33(3), 376–392.
McHugh, M. L. (2012). Interrater reliability: The kappa statistic. Biochemia Medica, 22, 276–282.
Mikula, G., Petri, B., & Tanzer, N. (1990). What people regard as unjust: Types and structures of everyday experiences of injustice. European Journal of Social Psychology, 20(2), 133–149. https://doi.org/10.1002/ejsp.2420200205
Miller, A., Shoptaugh, C., & Wooldridge, J. (2011). Reasons not to cheat, academic-integrity responsibility, and frequency of cheating. Journal of Experimental Education, 79, 169–184.
Murdock, T. B., Stephens, J. M., & Grotewiel, M. M. (2016). Student dishonesty in the face of assessment: Who, why, and what we can do about it. In Brown, G. T. & Harris, L. (Eds.), Handbook of human factors and social conditions in assessment (pp. 186–203). London, UK: Routledge.
Nadelhoffer, T., & Feltz, A. (2008). The actor-observer bias and moral intuitions: Adding fuel to Sinnott-Armstrong’s fire. Neuroethics, 1(2), 133–144.
Nucci, L., & Gingo, M. (2011). The development of moral reasoning. In Goswami, U. (Ed.), The Wiley-Blackwell handbook of childhood cognitive development (pp. 420–444). Wiley-Blackwell.
Nucci, L., Turiel, E., & Roded, A. D. (2017). Continuities and discontinuities in the development of moral judgments. Human Development, 60(6), 279–341. https://doi.org/10.1159/000484067
Oliner, S. P., & Oliner, P. M. (1988). The altruistic personality: Rescuers of Jews in Nazi Europe. Free Press.
Pronin, E., Gilovich, T., & Ross, L. (2004). Objectivity in the eye of the beholder: Divergent perceptions of bias in self versus others. Psychological Review, 111, 781–799. https://doi.org/10.1037/0033-295X.111.3.781
R Core Team (2021). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/
Rawls, J. (1971). A theory of justice. Harvard University Press.
Rettinger, D. A., & Kramer, Y. (2009). Situational and personal causes of student cheating. Research in Higher Education, 50, 293–313. https://doi.org/10.1007/s11162-008-9116-5
Royzman, E. B., Kim, K., & Leeman, R. F. (2015). The curious tale of Julie and Mark: Unraveling the moral dumbfounding effect. Judgment and Decision Making, 10(4), 296–313.
Scanlon, T. M. (1998). What we owe to each other. Harvard University Press.
Schein, C. (2020). The importance of context in moral judgments. Perspectives on Psychological Science, 15(2), 207–215. https://doi.org/10.1177/1745691620904083
Schwitzgebel, E., & Ellis, J. (2017). Rationalization in moral and philosophical thought. In Bonnefon, J. & Trémolière, B. (Eds.), Moral inferences (pp. 178–198). Psychology Press. https://doi.org/10.4324/9781315675992
Serota, K. B., & Levine, T. R. (2015). A few prolific liars: Variation in the prevalence of lying. Journal of Language and Social Psychology, 34(2), 138–157. https://doi.org/10.1177/0261927X14528804
Serota, K. B., Levine, T. R., & Docan-Morgan, T. (2022). Unpacking variation in lie prevalence: Prolific liars, bad lie days, or both? Communication Monographs, 89(3), 307–331. https://doi.org/10.1080/03637751.2021.1985153
Sharot, T., Velasquez, C. M., & Dolan, R. J. (2010). Do decisions shape preference? Evidence from blind choice. Psychological Science, 21(9), 1231–1235. https://doi.org/10.1177/0956797610379235
Shu, L. L., Gino, F., & Bazerman, M. H. (2011). Dishonest deed, clear conscience: When cheating leads to moral disengagement and motivated forgetting. Personality and Social Psychology Bulletin, 37(3), 330–349. https://doi.org/10.1177/0146167211398138
Stanley, M. L., & De Brigard, F. (2019). Moral memories and the belief in the good self. Current Directions in Psychological Science, 28(4), 387–391. https://doi.org/10.1177/0963721419847990
Stephens, J. M. (2017). How to cheat and not feel guilty: Cognitive dissonance and its amelioration in the domain of academic dishonesty. Theory Into Practice, 56, 111–120. https://doi.org/10.1080/00405841.2017.1283571
Sykes, G. M., & Matza, D. (1957). Techniques of neutralization: A theory of delinquency. American Sociological Review, 22, 664–670.
Turiel, E. (2002). The culture of morality: Social development, context, and conflict. Cambridge, UK: Cambridge University Press.
Turiel, E. (2003). Morals, motives, and actions. In Smith, L., Rogers, C., & Tomlinson, P. (Eds.), Development and motivation: Joint perspectives, British Journal of Educational Psychology: Monograph Series II, Number 2 (pp. 29–40). British Psychological Society.
Turiel, E., & Dahl, A. (2018). The development of domains of moral and conventional norms, coordination in decision-making, and the implications of social opposition. In Bayertz, K. & Roughley, N. (Eds.), The normative animal? On the anthropological significance of social, moral, and linguistic norms. Oxford University Press.
Valdesolo, P., & DeSteno, D. (2007). Moral hypocrisy: Social groups and the flexibility of virtue. Psychological Science, 18(8), 689–690. https://doi.org/10.1111/j.1467-9280.2007.01961.x
Van den Bos, K. (2003). On the subjective quality of social justice: The role of affect as information in the psychology of justice judgments. Journal of Personality and Social Psychology, 85, 482–498.
Vinckier, F., Rigoux, L., Kurniawan, I., Hu, C., Bourgeois-Gironde, S., Daunizeau, J., & Pessiglione, M. (2019). Sour grapes and sweet victories: How actions shape preferences. PLoS Computational Biology, 15(1), e1006499. https://doi.org/10.1371/journal.pcbi.1006499
Wainryb, C. (1991). Understanding differences in moral judgments: The role of informational assumptions. Child Development, 62(4), 840–851. https://doi.org/10.2307/1131181
Wainryb, C., & Turiel, E. (1993). Conceptual and informational features in moral decision making. Educational Psychologist, 28(3), 205–218. https://doi.org/10.1207/s15326985ep2803_2
Waltzer, T., & Dahl, A. (2021). Students’ perceptions and evaluations of plagiarism: Effects of text and context. Journal of Moral Education, 50(4), 436–451. http://doi.org/10.1080/03057240.2020.1787961
Waltzer, T., & Dahl, A. (2022). The moral puzzle of academic cheating: Perceptions, evaluations, and decisions. In Rettinger, D. A. & Gallant, T. B. (Eds.), Cheating academic integrity: Lessons from 30 years of research (pp. 99–130). Wiley/Jossey Bass.
Waltzer, T., & Dahl, A. (2023). Why do students cheat? Perceptions, evaluations, and motivations. Ethics & Behavior, 33(2), 130–150. https://doi.org/10.1080/10508422.2022.2026775
Waltzer, T., Samuelson, A., & Dahl, A. (2022). Students’ reasoning about whether to report when others cheat: Conflict, confusion, and consequences. Journal of Academic Ethics, 20, 265–287. https://doi.org/10.1007/s10805-021-09414-4
Yachison, S., Okoshken, J., & Talwar, V. (2018). Students’ reactions to a peer’s cheating behavior. Journal of Educational Psychology, 110, 747–763.
Yardley, J., Rodriguez, M. D., Bates, S. C., & Nelson, J. (2009). True confessions? Alumni’s retrospective reports on undergraduate cheating behaviors. Ethics and Behavior, 19, 1–14.
Zhao, L., Heyman, G. D., Chen, L., Sun, W., Zhang, R., & Lee, K. (2019). Cheating in the name of others: Offering prosocial justifications promotes unethical behavior in young children. Journal of Experimental Child Psychology, 177, 187–196. https://doi.org/10.1016/j.jecp.2018.08.006