
The Deceiving Game

Published online by Cambridge University Press: 16 August 2021

SHLOMO COHEN
Affiliation:
BEN-GURION UNIVERSITY OF THE NEGEV (shlomoe@bgu.ac.il)
RO'I ZULTAN
Affiliation:
BEN-GURION UNIVERSITY OF THE NEGEV (zultan@bgu.ac.il)

Abstract

The moral comparison of the three venues of deception—lying, falsely implicating, and nonverbal deception—is a central, ongoing debate in the ethics of deception. To date there has been no attempt to advance in the debate through experimental philosophy. Using methods of experimental economics, we devised a strategic game to test positions in the debate. Our article presents the experimental results and shows how philosophical analysis of the results allows drawing valid normative conclusions. Our conclusions testify against the dominant position in the debate—that lying is morally worse than all non-lying deceptions. They offer prima facie support to the view that the venue of deception makes no moral difference.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright © The Author(s), 2021. Published by Cambridge University Press on behalf of the American Philosophical Association

One of the central questions debated in the ethics of deception involves the moral comparison of different types of deception, based on the form (mode, venue) of communication employed. The three forms of deception that comprise (at least) the vast majority of deceptions are (1) lying, or asserting falsehoods;[1] (2) falsely implicating, or communicating truths that in a given context will predictably cause false beliefs; and (3) nonverbal deception, or nonverbal action whose predicted interpretation is intended to create false beliefs.[2] The debate, then, is whether it makes a moral difference how one deceives, given the different forms of deception.

The leading position in this debate has been that lying is morally worse than the other forms of deception. This view has a rich cultural history: it was held by such great thinkers as Augustine, Aquinas, and Kant, and it continues to be the prominent view among contemporary philosophers (Chisholm and Feehan 1977; Bok 1989; Adler 1997; Strudler 2010; Webber 2013; Shiffrin 2014; Berstler 2019). We refer to this position as the classical view. Justifications for the classical view include the ideas that others have a right to the truth only vis-à-vis what one asserts, that lying to a person's face is more disrespectful and shameless, that lying entails a greater loss of credibility as a communicator, and more. A second important position is that it makes no difference morally how one deceives (Williams 2002; Saul 2012a, 2012b). This can be readily understood as a particular application of the widely held moral intuition that what determines moral wrongness is some function of intention and consequences only, so that if the intent to deceive is the same and the result (the false belief created) is the same, moral evaluation must be similar. We refer to this position as the equivalence thesis. Jonathan Adler succinctly sums up the dilemma of the moral comparison between the classical view and the equivalence thesis: ‘From one angle, there is no moral difference. If you are going to mislead, just go ahead and lie. From another angle, the [non-lying] deceiver does manage to avoid a far worse wrong, even if his means are tainted’ (1997: 446). Against the benchmark moral intuition that the precise manner of wronging someone is morally irrelevant as such, the aim of our essay is to check whether lying indeed constitutes ‘a far worse wrong’.[3]

There has to date been no attempt to move forward in this debate using empirical methods. This may be anything but surprising: given that it is a normative debate between two moral positions, it would seem wrongheaded to attempt to solve it in the lab. Normative debates cannot be reduced to analysis in descriptive terms only (transgressing against this constraint is known as the naturalistic fallacy). Nonetheless, we argue that empirical evidence can legitimately inform the normative debate in meaningful and even decisive ways. We support this argument by describing an experiment that we devised, and then explaining in detail the significant ways in which it allows us to make headway in the normative debate.

The project we describe here is fundamentally different from that of experimental work on deception and dishonesty in the social sciences (for meta-analyses, see Abeler, Nosenzo, and Raymond 2019; Gerlach, Teodorescu, and Hertwig 2019). The objective in those studies is, roughly, to detect existing norms against deception, and to examine the various factors that influence compliance with them. Our objective, in contrast, is to adjudicate normatively between moral positions, that is, to exercise moral judgment regarding the relative rightness or wrongness of the given positions. This is a completely different undertaking. Hence, the experimental results that we present are not in themselves our aim; they are input in the service of the normative analysis that follows, which constitutes the heart of our endeavor.[4]

Beyond the employment of empirical methods, our approach to the problem of morally comparing the forms of deception is innovative in a second methodological sense too. The growing field of experimental moral philosophy has not yet tapped into the resources offered by experimental economics,[5] as far as using them as tools for normative inferences. Methods of experimental economics have been used in experimental moral philosophy (for example, Bicchieri 2006) to investigate descriptive/comparative ethics, that is, social norms, not, however, to investigate normative/prescriptive ethics, that is, to establish moral rightness and wrongness. Skepticism about the prospects of common questionnaire-type methods of experimental philosophy to license normative inferences about the moral dilemma regarding modes of deception led us to an alternative paradigm: we devised a strategic game to test this question experimentally. While questionnaire studies survey people's moral views, we constructed a situation where participants are incentivized to use deception, enabling us to learn about people's actual behavior. As our discussion will make clear, this experimental approach allows study of the implicit empirical assumptions (‘commitments’) of the different normative views; it consequently allows us to proceed much further in gaining normative insights from empirical results.[6] Indeed, a second main objective of this essay—beyond contributing to the debate on the moral comparison of forms of deception—is to demonstrate, with respect to a concrete problem, how far empirically informed moral reasoning can advance while respecting the logical constraint of the descriptive-normative gap.

1. Experimental Scheme: The Deceiving Game

The Deceiving Game is a strategic game that takes the form of a financial consulting interaction. One player, the investor, chooses how much out of an endowment of 100 points to invest in a virtual project. The project either pays a return of 250 percent on the investment or loses the investment altogether. The other player—the consultant—has access to certain information about the odds of the project's success, and can advise the investor. The consultant receives remuneration equal to the investor's investment, and thus has an incentive to misrepresent information indicating that conditions for investment are unfavorable.

We implemented the Deceiving Game as follows. A computer program selects one of two urns randomly to determine the outcome of the investment. The blue urn contains three blue and two orange balls, while the orange urn contains three orange and two blue balls. The selected urn represents the state of the world. The project succeeds if and only if the blue urn was chosen by the computer (a straightforward interpretation would be that the blue and orange urns represent bullish and bearish markets, respectively).

The consultant is shown three balls drawn from the chosen urn. These always include one blue and one orange ball, with the third ball randomly picked from the remaining three balls. A consultant who applies Bayes's rule after observing two blue and one orange balls assigns a probability of two thirds to the chosen urn being the blue urn, and similarly for the orange urn when observing two orange and one blue ball. Since the consultant's payoff equals the amount invested by the investor, the consultant has an incentive to persuade the investor that she observed two blue and one orange balls, even if she did not.
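To make the information structure concrete, here is a minimal Python sketch (ours, for illustration only; the experiment itself was not implemented this way) that simulates the draw procedure and verifies the two-thirds posterior by Monte Carlo:

```python
import random
from collections import Counter

def draw_sample():
    """Simulate one round: choose an urn, then show the consultant three balls.

    Per the game's design, the sample always contains one blue and one orange
    ball; the third ball is drawn at random from the three remaining balls.
    """
    urn = random.choice(["blue", "orange"])
    if urn == "blue":
        balls = ["blue"] * 3 + ["orange"] * 2
    else:
        balls = ["blue"] * 2 + ["orange"] * 3
    balls.remove("blue")    # the guaranteed blue ball
    balls.remove("orange")  # the guaranteed orange ball
    third = random.choice(balls)  # the third ball, from the remaining three
    return urn, third

# Monte Carlo check: P(blue urn | third ball blue, i.e., blue majority) = 2/3
counts = Counter()
for _ in range(100_000):
    urn, third = draw_sample()
    if third == "blue":
        counts[urn] += 1

posterior = counts["blue"] / (counts["blue"] + counts["orange"])
print(f"P(blue urn | two blue, one orange observed) ~ {posterior:.3f}")  # ~0.667
```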

Next, the consultant must choose one of two communication options. These vary across three experimental conditions, corresponding to the three modes of deception outlined above:

  • In the Lies condition, the consultant chooses whether to send to the investor the message ‘I saw two blue balls’ or ‘I saw two orange balls’.

  • In the Falsely Implicating condition, the consultant chooses between sending ‘I saw blue’ or ‘I saw orange’. Because (per the game's design) the consultant always observes at least one blue and one orange ball, both messages are always literally true. The former, however, implicates that a majority of blue balls was observed.

  • In the Nonverbal Deception condition, the consultant chooses whether to place a small bet of five points, which pays ten points if and only if the chosen urn is the blue urn, or not to place a bet. This choice is (known to be) revealed to the investor, who may be expected to draw conclusions accordingly.

We refer to the first option in each condition, which communicates having observed a majority of blue balls, as a BLUE message, and to the second option as an ORANGE message.

A general methodological note is in order. In the Deceiving Game, people choose whether to deceive but not how to deceive, as only one form of deception is available to each participant. Although in real life people are typically free to choose how to deceive, we believe nonetheless that our design provides the cleanest and most direct comparison between the three forms of deception. In the alternative, ‘choice of form of deception’ paradigm, the choice of one form depends not only on the attractiveness of that form, but also on that of the available alternatives. In our design, in contrast, each form is judged independently of the others, without cross-contamination of preferences. This independent measurement allows quantifying the willingness to deceive in each form, whereas a choice between forms only stands to inform us which form is more attractive (even if merely infinitesimally so). Thus in the latter design we risk losing important information. Moreover, the ‘choice of form of deception’ paradigm is susceptible to demand characteristics. That is, if participants were asked to explicitly choose between forms, their answers would likely have been influenced by what they consider the experimenters to expect from them, thus biasing our results. In a choice whether to deceive, in comparison, we avoid this most central form of bias; decisions then better represent intrinsic preferences regarding form of deception.

Below we describe two independent experiments we conducted that studied behavior in the Deceiving Game, and their results. First, however, we should articulate our experimental hypotheses. In section 3, below, we present and discuss various empirically testable conditions, whose existence would support the classical view (they are ‘empirical commitments’ of the classical view). If these conditions in fact hold, then we would expect: (1) less lying, compared to other forms of deception, and (2) more trust placed in assertions (which potentially can be lies) than in other forms of communication (which potentially can be non-lying deceptions). We operationalize the two expectations in the following hypotheses:

Hypothesis 1: The percentage of choices to deceive by consultants is lower in Lies than in Falsely Implicating and Nonverbal Deception.

Hypothesis 2: The mean difference in investment between receiving a BLUE message and an ORANGE message is higher in Lies than in Falsely Implicating and Nonverbal Deception.

If these hypotheses are confirmed by the experimental results, they provide grounding for normative arguments in favor of the classical view, as we explain below. Conversely, if these hypotheses are not confirmed by the experimental results, then the conditions that constitute the classical view's empirical commitments do not in fact hold, and support for the classical view is thereby undermined.

We note that Hypothesis 2 expresses the thought that people assign a higher probability to the message (or the inferred state) being true when the message, if deceptive, is a lie. In line with classical decision theory, best developed and articulated by Leonard Savage (1954), we interpret ‘assigning a higher probability’ as a higher willingness to bet on the outcome with which the probability in question is associated. Simply put, if people prefer a gamble that will give them a desirable outcome on event A over a gamble that will give them the same outcome on event B, we say that they assign a higher probability to event A than to event B. The application to the Deceiving Game is straightforward, where the events in question are the conditional events ‘The blue urn was chosen (by the computer), given the consultant's action X’, with the content of X varying across conditions.
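To illustrate the betting interpretation in the game's own terms, the following sketch (ours; it assumes a risk-neutral investor, which actual participants need not be) computes the expected payoff of an investment given an assigned probability, using the 100-point endowment and 250 percent return from the game description:

```python
def expected_payoff(invest: float, p_blue: float,
                    endowment: float = 100.0, gross_return: float = 2.5) -> float:
    """Expected payoff from investing `invest` points while assigning
    probability `p_blue` to the blue urn (i.e., to the project succeeding)."""
    return (endowment - invest) + p_blue * gross_return * invest

# A risk-neutral investor invests everything iff 2.5 * p_blue > 1, i.e., p_blue > 0.4,
# and nothing below that threshold (at exactly 0.4 she is indifferent).
for p in (1 / 3, 2 / 3):
    best = max(range(101), key=lambda x: expected_payoff(x, p))
    print(f"p_blue = {p:.2f}: optimal investment = {best}")
```

On this reading, a larger investment gap between BLUE and ORANGE messages reveals a larger gap in assigned probabilities, which is exactly how Hypothesis 2 treats investments as a measure of trust.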

2. Experiments and Results

2.1 Experiment 1

The first experiment was conducted as a classroom experiment among 139 economics students (78 females and 61 males, mean age 24) at Ben-Gurion University of the Negev. Subjects were randomized into the three experimental conditions, with each subject participating in one condition. The experimenter entered the classroom towards the end of the class and offered a chance to participate in a short experiment for money. The students decided whether to participate before learning the details of the experiment. The experimenter handed out the written instructions for the consultant role, and explained the structure of the game (without referring to the content of the message, as different participants received instructions for different conditions; the instructions referred to ‘sender’ and ‘receiver’ rather than ‘consultant’ and ‘investor’; they did not mention deception or any other morally loaded terms). See appendix 1 for the complete translation.

All participants answered comprehension questions, and made two decisions in the role of the consultant, conditional on observing a majority of blue or a majority of orange balls. For the second part of the experiment, the instructions simply indicated that the roles are reversed, and instructed the participants to make two more decisions, now in the role of the investor, conditional on receiving a BLUE or an ORANGE message. After collecting all decision forms, we randomly matched the participants into pairs of consultant and investor to calculate payoffs. Payoffs were stated in New Israeli Shekels (NIS) and paid out in class a week after the experiment took place; we contacted participants who were not in attendance to arrange payment separately.

This experimental design allowed us to measure two central variables of interest. First, the tendency to deceive in each of the conditions was measured as the proportion of consultants who sent the BLUE message after observing a majority of orange balls. Second, the level of trust of the investors in the messages sent by the consultants was measured as the increase in their investments after receiving the BLUE message compared to their investment after receiving the ORANGE message.
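In analysis code, the two variables reduce to a proportion and a difference of means. A minimal pandas sketch, with hypothetical file and column names (the actual data layout may differ):

```python
import pandas as pd

# Hypothetical long-format file: one row per decision
df = pd.read_csv("deceiving_game_experiment1.csv")

# Tendency to deceive: share of consultants sending BLUE after an orange majority
consultants = df[df["role"] == "consultant"]
deception_rate = (
    consultants.loc[consultants["observed_majority"] == "orange"]
    .groupby("condition")["message"]
    .apply(lambda m: (m == "BLUE").mean())
)

# Trust: mean investment after a BLUE message minus after an ORANGE message
investors = df[df["role"] == "investor"]
mean_invest = investors.groupby(["condition", "message"])["investment"].mean().unstack()
trust = mean_invest["BLUE"] - mean_invest["ORANGE"]

print(deception_rate)
print(trust)
```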

2.1.1 Results of Experiment 1

Were consultants less likely to deceive when deception involved a lie, thus providing support for the classical view? The left panel of figure 1 presents the proportion of consultants who chose to send the BLUE message when observing one blue and two orange balls. We see that deception rates are, if anything, higher in the Lies condition, with 31 of 48 (64.6 percent) participants choosing to deceive, compared to 24 of 47 (51.1 percent) and 25 of 44 (56.8 percent) in the Falsely Implicating and Nonverbal Deception conditions, respectively. The differences between the three conditions are not significant (χ2(2) = 1.79, p = 0.408). The proportion of participants who chose to deceive in the Lies condition is 10.7 percentage points higher than in Falsely Implicating and Nonverbal Deception combined, with a 95 percent confidence interval for the difference of [−6.2, 27.7] (Koopman 1984). That is, we can reject the hypothesis that deception rates in the Lies condition are lower than in the other two conditions by 6.3 percentage points or more.
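The reported counts are enough to reproduce the test; a quick scipy check (our sketch) on the 2 × 3 contingency table:

```python
from scipy.stats import chi2_contingency

# Rows: deceived / did not deceive; columns: Lies, Falsely Implicating, Nonverbal
table = [[31, 24, 25],
         [48 - 31, 47 - 24, 44 - 25]]
chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")  # chi2(2) = 1.79, p = 0.408
```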

Figure 1. Deception and Trust in Experiment 1

The right panel of figure 1 presents the investors’ reactions to the consultants’ choices, that is, the mean difference in investment between receiving a BLUE and an ORANGE message. This difference was almost identical in Lies and Nonverbal Deception (30.5 points and 30.7 points, respectively) and slightly and nonsignificantly higher in Falsely Implicating (39.3 points; F(2,136) = 0.76, p = 0.468, η2 = 0.011, ω2 = 0.00 for the one-way ANOVA). A non-negligible share of investments are left (right) censored following an ORANGE (BLUE) message: 25.9 percent and 32.4 percent, respectively. A tobit regression of investment on condition, message, and their interaction, censored at 0 and 100, yields essentially identical results.

2.1.2 Establishing Equivalence

The lack of significant evidence in support of the classical view is not sufficient, in itself, to reject the hypotheses underlying the classical view, as there can always be a small and undetectable effect in the hypothesized direction. Nonetheless, we can test whether such effects, if they exist, are of negligible magnitude at best. For our main analyses we conduct inferiority tests.[7] That is, we test the null hypothesis that the effect size, as measured by Cohen's d for the difference between Lies and the other two conditions in the predicted direction, is larger than a minimal benchmark, which we set based on effect sizes observed in the relevant psychological literature (Cohen 1988). In appendix 2, we report the full details of the analysis, including results for a more conservative benchmark.
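The exact procedure is specified in appendix 2; purely as an illustration of the logic, an inferiority test of this kind can be cast as a one-sided test of the observed Cohen's d against the benchmark. In the sketch below, the benchmark value and the simulated data are placeholders, not the authors' actual choices:

```python
import numpy as np
from scipy import stats

def inferiority_test(x, y, d_benchmark):
    """One-sided test of H0: d >= d_benchmark vs. H1: d < d_benchmark,
    where d is Cohen's d for the difference between samples x and y.
    A rough large-sample approximation, for illustration only."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    d_obs = (x.mean() - y.mean()) / pooled_sd
    t = (d_obs - d_benchmark) * np.sqrt(nx * ny / (nx + ny))
    p = stats.t.cdf(t, df=nx + ny - 2)  # small d_obs relative to benchmark -> small p
    return d_obs, p

# Example with simulated data and a placeholder benchmark of d = 0.3
rng = np.random.default_rng(0)
lies = rng.normal(0.0, 1.0, 48)      # Lies condition
others = rng.normal(0.0, 1.0, 91)    # other two conditions combined
print(inferiority_test(lies, others, d_benchmark=0.3))
```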

The results of the inferiority tests are significant, at p < .001 for consultants and p = .001 for investors. Accordingly, we conclude that the mode of deception had no meaningful effect, either on deception or on trust, in the direction supporting the classical view. The power to detect the benchmark effect sizes is 1 − β = .781 for both consultants and investors.

2.2 Experiment 2

We ran the second experiment as a laboratory experiment. In addition to allowing us to corroborate the results of the classroom experiment, a laboratory experiment has several advantages. The laboratory setting provided ample time (approximately seventy-five minutes per session) for guaranteeing participants’ understanding of the instructions (through detailed explanations, answers to clarification questions, and control questions testing understanding). Each participant played the game repeatedly, ten times in each role, increasing the statistical power and providing further opportunity for learning to take place and for testing whether experience alters behavior in the game.

The basic game was the Deceiving Game described above. At the beginning of the experiment, participants were randomly allocated to the roles of consultant and investor. In each round, the computer randomly (re)matched participants in pairs of consultant and investor within matching groups of eight participants (a standard practice in experimental economics aimed at ensuring statistical independence between matching groups). The consultant saw three balls and chose a message by clicking on the message (or betting option) presented on the screen. Next, the matched investor was informed of the consultant's action and chose an investment. At the end of the round, both participants received feedback regarding the chosen urn, the consultant's message, the investment, and their round payoffs. After ten rounds, the roles were reversed for an additional ten rounds. The final payoff in points was the participant's total earnings in five randomly selected rounds out of the twenty rounds. The payoff was converted to NIS at a conversion rate of 100 points = 10 NIS and added to a 15 NIS base fee. The average final payoff was 55.60 NIS (approximately 16 USD). A total of 168 participants were recruited using the web-based system ORSEE (Greiner 2015); the experiment was programmed in z-Tree (Fischbacher 2007).

2.2.1 Results of Experiment 2

Figure 2 presents the results, with 95 percent confidence intervals based on mixed-effects linear regressions with random effects for participants and robust standard errors clustered on matching groups. The left panel presents the estimates for the proportion of deceptive choices by condition. The right panel presents the estimates for the marginal effect of the message on investment by condition (that is, the difference in mean investments depending on receiving an ORANGE or a BLUE[8] message, as predicted by the regression model). The results corroborate the findings in Experiment 1, with no apparent condition effects. Willingness to deceive and trust in the message are practically identical in Lies and Falsely Implicating. Willingness to deceive is lower in the Nonverbal Deception condition, though the difference is not statistically significant. (This difference may be due to the small cost of deception incurred in this condition.)

Figure 2. Deception and Trust in Experiment 2
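For readers who want to fit a regression of the kind just described, a statsmodels sketch follows. It uses random participant effects but, for simplicity, omits the cluster-robust standard errors on matching groups that the reported intervals rely on; file and column names are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file: one row per consultant decision in Experiment 2
df2 = pd.read_csv("deceiving_game_experiment2.csv")

# Linear probability model of deceiving on condition,
# with random intercepts per participant
model = smf.mixedlm("deceive ~ C(condition)", data=df2, groups=df2["participant"])
result = model.fit()
print(result.summary())
```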

As in experiment 1, we conducted inferiority tests of the null hypothesis that there is no nontrivial effect in the direction supporting the classical view. The inferiority test yields highly significant results of p = .002 for consultants and p = .001 for investors. Power is 1-β = .962 for consultants and 1-β = .993 for investors. (Again, see appendix 2 for details.)

2.3 Conclusions from the Two Experiments

Two independent experiments, using different subject pools and protocols and comprising 307 participants who made, in total, 1,819 decisions in each role, yielded similar behavior in the different conditions of the Deceiving Game. In particular, we find (a) that people are not less likely to deceive when the only way to do so involves explicit lies—that is, we can reject Hypothesis 1; and (b) that people are not more trusting of explicit messages that, if deceptive, are outright lies—that is, we can reject Hypothesis 2 (in the sense that we can reject the hypotheses of meaningful differences between Lies and the other two conditions).

3. Normative Insights

We now move to the central task of extracting valid normative conclusions from our experiments. (Since both experiments yielded relevantly similar results, the discussion conveniently applies to both.) Prior to this, we should mention that we can also draw traditional conclusions, in terms of descriptive ethics: since we did not find less lying or more trusting behavior in the lying condition than in the other two, we can conclude that the classical view does not represent folk moral commitments (rather, the equivalence thesis seems to reflect them best). While this is an interesting result, the focus of our analysis here is different—it is to draw normative conclusions. The general idea is to identify the ‘empirical commitments’—that is, the empirically testable elements—of the moral principles that are assumed in this debate, to show how these are addressed by our experiments, and to analyze the normative import of the results.

In one trivial sense, descriptive observations are always relevant to moral judgment, viz. in determining whether a moral principle at all applies to the given situation (for example, trivially, the application of ‘murder is wrong’ is relevant only when (roughly) one person intentionally killing another is the issue at hand). In our experiment, however, our task is to adjudicate between two normative positions, that is, to judge which view is more correct, morally speaking; the appeal to empirical facts is hence prima facie suspicious, and therefore interesting.

3.1 Conclusions from Equivalence among Investors across Conditions

In this section, we first present the inference from the experimental finding of equivalence among investors across the three conditions to the normative conclusion; then we discuss methodological and theoretical assumptions that support this inference—that is, a set of insights which, taken together, entail that our results indeed justify deriving moral judgment regarding the relative wrongness among the three modes of deception.

The first argument for the classical view that we consider is that people trust the veracity of assertions more than of other forms of communication, and that therefore lying amounts to a greater betrayal of trust (and is for that reason morally worse).

The similar increases in investments in Lies, Falsely Implicating, and Nonverbal Deception following a BLUE (compared to an ORANGE) signal can naturally be taken to show similar levels of trust/mistrust in the consultants across the conditions. The investors understood perfectly well that the consultants have an incentive to deceive and are therefore liable to do so; they also understood by what means deceptions would be carried out. If the investors' baseline level of trust in the propensity to truthfulness of the consultants, and in the veracity of the messages sent by them, had been lower in one condition compared to another, they would have been correspondingly more averse to taking the risk involved in investing (that is, losing the entire investment), and would have invested less on average. To the extent that the levels of trust of the deceived are similar across the conditions—to which the experimental results indeed testify (see discussion below)—the breach of their trust by the different kinds of deceptions is of similar magnitude. Now, since the wrongness of deception is, ex hypothesi, a function of betrayal of trust (betrayal of trust is a wrong-making feature of deception), and since the design of our experiment allows us to compare betrayals of trust across the three modes of deception, then having found similar levels of betrayal of trust, we are allowed to draw a normative conclusion: our results, which seem to align with the equivalence thesis (similar wrongness across forms of deception), undermine support for the classical view. The pivotal idea here is that since the wrong-making feature is a function of a psychological state (of the investors), it can be assessed empirically.

A general remark about the formal nature of our conclusions is in order. Failing to support a reason for X (the classical view, in our case) is different, logically speaking, from providing a reason against X. That being granted, we should also stress that when X competes against rival positions, undermining X can de facto testify against it, by making it inferior to existing alternative explanations. This point is especially relevant in our case, where the default position onto which we fall back when failing to support the classical view is prima facie in line with the equivalence thesis. In addition, systematically undermining plausible reasons for X can amount to a reason against X, in the sense of rendering X exceedingly implausible (as long as some further reasonable hypothesis is not put forth).

We now turn to the theoretical underpinnings of our inference from experimental results to normative reason. First, we clarify a basic methodological point; then, we explain the theoretical grounds for our analysis in terms of trust.

Notice, importantly, that we are not attempting to derive from the empirical results an answer to the question of whether, as a rule, betraying (justified) trust is morally wrong. We rather accept and presuppose the validity of the—hardly controversial—moral judgment ‘betraying (justified) trust is morally wrong’, but then focus on the empirical dimension that betraying trust has, and attempt to assess and analyze its contribution to solving the normative debate as to which form of deception is morally worse. In other words, while the debate whether lying is or is not a greater moral wrong is indeed a normative one, our approach is to identify a ground-level moral principle that underlies this debate, identify the empirical (psychological) dimensions involved in observing that principle, show how our experiment can measure those empirical elements, and use this to arrive at a moral verdict. Since the intuitive ground-level moral principle is presupposed to be true, deriving a normative conclusion from the empirical investigation does not ultimately commit the naturalistic fallacy.

The possibility of drawing normative conclusions that we describe is not unique (which might have made it suspicious). A moral agent has, for instance, moral reason to avoid greater rather than lesser harming of others; yet what in fact constitutes greater or lesser harming is arguably determined (at least partly) by the psychology of people, that is, by what they experience as a greater or lesser drawback to their interests or welfare. In parallel, there is moral reason to avoid greater rather than lesser betrayal of trust, yet that which in fact constitutes greater or lesser betrayal of trust is determined (at least partly) by the psychology of trusting. Since our experiment operationalizes this psychological attitude, it can legitimately inform the normative debate between the classical view and the equivalence thesis, without committing a naturalistic fallacy. (Empirical psychology can validly inform normative debates in more ways, not restricted to the ‘greater than’ form. For instance, when debating between two actions to perform, there is moral reason to prioritize the action that is one's duty over that which is over-demanding and hence supererogatory; yet what counts as over-demanding is determined, at least partly, by the psychology of moral agents, that is, by what in fact compromises agents’ basic interests or adversely affects their welfare to an unreasonable degree.)

One might worry that, since ‘betraying trust’ can mean different things, our use of it might be conceptually untidy. We therefore clarify the theoretical grounds, and consequently the validity, of our use of the notions of trust and betrayal of trust.

It is widely accepted that trust is never placed in someone nonspecifically but always with respect to some particular kind of performance. As Russell Hardin (2002: 9) put it, trust is ‘a three-part relation: A trusts B to do X’. This then is also the case with respect to the investors in our experiment; and in the context of the extremely specific interaction they have with the consultants, it seems prima facie clear that their trust, to the extent it exists, refers to the expectation that the consultants not deceive them.

All deception necessarily involves betrayal of trust (in some sense; correspondingly, the very possibility of deception presupposes a background of trust). This underlies the cogency of the comparison we draw among the three forms (modes) of deception. It has been argued (for example, Chisholm and Feehan 1977) that assertions constitute a unique ‘invitation to trust’; but even if that is true, it does not follow that betrayal of trust is restricted to lying. As Bernard Williams (2002: 94) puts it, ‘Truthfulness is a form of trustworthiness, that which relates in a particular way to speech’. He stresses, ‘Trustworthiness is more than the avoidance of lying’ (2002: 97). This is so since asserting is but a restricted part of speech. ‘There may be special circumstances in which it is understood that a hearer is to ignore everything about an assertion except its content, but they are very special. In general, in relying on what someone said, one inevitably relies on more than what he said’ (2002: 100). Trusting (‘relying on’) others to be truthful forms part of the bedrock of human communication in all its linguistic manifestations; hence, deceiving, by any means, involves betrayal of trust. (A closely related intuition on the non-uniqueness of lying with respect to conversational trust is found in Saul [2012b: 75–79].) These intuitions receive systematic support from Paul Grice's theory of language, according to which linguistic exchange relies on the assumption that interlocutors are (usually, to some degree) engaged in a cooperative enterprise. Hence typical linguistic exchange presupposes the cooperative principle: ‘Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged’ (Grice 1989: 26). This refers to conversation in all its linguistic aspects—to the way things are said, not merely to what is said. Since the assumption of cooperation is an assumption of trustworthiness, trust refers to all aspects of linguistic expression. In parallel, deception via all our three forms constitutes a betrayal of trust.

According to Collin O'Neil, although a special invitation to trust is associated with some forms of deliberate communication and not others (and deception via these forms ‘misuses’ and ‘abuses’ trust), deception by any form of communication ‘consists in failing to perform as one is trusted to perform’ (2012: 306), and the wrong associated with this is ‘betrayal of trust’; indeed, ‘trust need not be invited for a betrayal of trust to occur’ (2012: 318). Hence, deception via all three forms betrays trust. Lastly, the intuitions above regarding trust are corroborated and enhanced by analysis in terms of ‘warrant of truth’ (Carson 2010). Deception is not even possible in a theatrical performance, in ‘bull sessions’, and the like, since in such interactions there is no presumption that truth is warranted. Conversely, if truth is warranted, then deception betrays trust, regardless of the form of deception.

Having established that the forms of deception are in principle comparable in terms of betrayal of trust, we can now address a skeptical challenge to the effect that our experimental method lacks sufficient information to render a moral verdict. The idea is that there may be different senses of (betrayal of) trust in play, and that each may have different moral weight. For instance, assuming that O'Neil's analysis (above) is right, the experimental results may fail to distinguish between the moral effects of ‘abusing trust’ and those of ‘betraying trust’. In response, the beautiful thing about the experimental setup is that it is not vulnerable to this potential difficulty. The various senses of trust (corresponding to the various senses of breaching trust), whatever they may be, de facto converge to an all-things-considered level of trust that is expressed in the bottom-line readiness to count on the consultant's word and stake an investment despite the risk (losing the entire investment). The crucial point is that the wrongness of betrayal of trust correlates directly with this all-things-considered position of trust—a bottom-line position of making oneself vulnerable to the other—which expresses the aggregate level of trusting (whatever its internal breakdown), and whose behavioral expression our experiment is constructed to measure! In other words, the wrongness of betrayal of trust is the wrongness of exploiting the position of vulnerability that the trusting other agrees to put herself in vis-à-vis the agent—and that vulnerability is precisely embodied in agreeing to take the risk of investing money that will be entirely lost if the consultant decides to deceive.[9]

(There are additional dimensions of trust and therefore of betrayal of trust: notably, those relating to the kind of relationship one has with others (for instance, betraying friends rather than strangers normally involves greater betrayal of trust), and to the type of scenario people are in (for instance, deceiving under oath normally involves greater betrayal of trust). Those dimensions are excluded from our experimental scheme, which focuses exclusively on comparing forms of deception. But even if they were not, the randomization of participants would have an equalizing effect on them, and then it would create no problem if they too entered into the all-things-considered aggregate trust that is experimentally measured.)

Based on the considerations above, we conclude that our experimental results fail to support an important normative reason for the classical view. Previous work has rarely used experimental findings to adjudicate normatively between rival moral positions, and none we know of has done so via methods of experimental economics. Having reached a normative conclusion, we have achieved all we can hope for from experimental moral philosophy; and yet, we may inquire further about the relative weight of that normative conclusion in an overall comparison of the three modes of deception. Comparing the modes of deception as such, we ex hypothesi keep intentions and results constant, and focus on candidates for being the intrinsic wrong-making features of deception (that is, the grounds of the judgment ‘the deceptive act as such is wrong’). When we do that, we find betrayal of trust to be arguably the most prominent candidate. We cannot argue for this view in this space, only mention that it seems much in the spirit of views such as Williams's (‘Truthfulness is a form of trustworthiness’). Other candidates admittedly exist (though not many)—for instance, manipulativeness, or an aesthetic flaw with ethical dimensions (Pepp 2019). We need not (and cannot) review all theoretical possibilities here (our discussion in the next section covers additional possibilities). What is important to emphasize, however, is that to the extent that philosophical reflection finds betrayal of trust to be the only or most important intrinsic wrong-making feature of deception, our experimental results produced not merely a moral reason, but the moral verdict on the dilemma we are investigating.

3.2 Conclusions from Equivalence among Consultants across Conditions

The experimental results show equivalence in the consultants' rate of deceiving across Lies, Falsely Implicating, and Nonverbal Deception. While this means that actual behavior does not follow the classical view, this piece of (merely) descriptive ethics is not what we are after here. Our interest is very different: it is in whether the equivalence in the consultants’ rate of deceiving across the conditions can undermine normative (moral) reasons for the classical view (and thus help adjudicate morally in the debate between the classical view and the equivalence thesis). The answer depends on the specific grounds for adopting the classical view. Below we discuss three salient possible grounds and explain, with respect to each, how it can support drawing normative conclusions from our results.

The first ground for the classical view is that lying is worse per convention. That norms relating to truthfulness change geographically and historically seems well established (for example, Gächter and Schultz 2016; Hugh-Jones 2016); the question of whether lying is worse or not may well be one aspect of that phenomenon. If it is, then all there is to say on the matter should be fully discoverable by experimental observation. That, in itself, would not further our purposes, however, since if experimental results of levels of deception are attributable to acting according to convention (descriptive ethics), then this cannot by itself satisfy the challenge of judging the soundness of moral reasons. Mere convention cannot command true normative (prescriptive) authority. Yet, while a ‘mere’ convention indeed does not furnish normativity, the following scheme can do so: (a) a plurality of relevant considerations fails to converge on one rational bottom-line moral conclusion; (b) the conventional principle based on the summation of the relevant considerations, which is as justifiable as other alternatives, is reflectively endorsed as providing the obligating norm. Once a norm has undergone such a process, there can be strong moral reasons of fairness to act accordingly, that is, for one to reciprocate by doing one's fair share for the success of the social enterprise. (The nonreciprocator may not only be guilty of free-riding on others but may risk harming them, too. Think, for example, of disrespecting the admittedly arbitrary norm of driving on the right-hand side of the road.) Now it is plausible that such an account indeed holds true for the question of assessing the classical view versus the equivalence thesis. If it does, then our experimental results can yield a normative reason.

The specific story in our case could, for instance, be described roughly along the following lines.[10] Lying demands less preparation, effort, and imaginativeness than other, craftier deceptions. It can therefore be more easily and readily produced, and for that reason it poses a greater threat to social cooperation. On the other hand, a lie is a less deniable form of deceiving, and as such it is more vulnerable to exposure and hence less of a social threat. From a different perspective, lying is worse, since the success of other forms of deception depends on the inferences others make, which shifts part of the responsibility away from the non-lying deceiver. On the other hand, the lie is not worse, as it is at least a more ‘authentic’ way of concealing the truth, without resorting to treacherous techniques that implicate others in their own deception. From yet another perspective, lying expresses a worse attitude toward truthfulness, as the evasive quality of all other forms of deception is the result of maneuvers aimed at not lying—thus, ironically, those other forms of deception confirm the value of truthfulness. On the other hand, since the lie needs less preparation (as mentioned above), it can be more mindlessly executed, and so offers less of a testimony to lack of respect for truthfulness. And so on. Now it is entirely sensible to argue that there can be no way to sum up these various opposing considerations reliably into one rational objective conclusion, and that we therefore normatively endorse the prevailing social norm that expresses a holistic sensibility about this issue, whatever it happens to be. Adler (1997), in a similar vein, speaks in this context of a progression from social norm to ethical norm. After claiming that ‘a norm corresponding to the lessened demands of truthfulness for implicatures would be desirable for all’, Adler hastens to add, ‘Such a norm of conversation acquires moral force’ (1997: 451). If such (or sufficiently similar) is the ground offered for the classical view, then the equivalence in the consultants’ rate of deceiving across our experimental conditions undermines a normative reason for the classical view.

A second ground for the classical view is that lying is worse because it reveals a deeper antisocial attitude and as such is a more sinister expression of moral character and motivation. According to this view, it is psychologically more difficult for a decent person to lie than to deceive in other ways. (The conventional and psychological grounds are not mutually exclusive, but they are different. The convention may be a direct function of social value or utility that is irreducible to individual psychology.) Normal upbringing includes a long process of conditioning not to say what is false; this results in greater psychological difficulty in uttering falsehoods compared with uttering truths, and this holds even when those truths are in the service of deception. Now, normally and other things being equal, for a person to overcome greater inhibitions in order to commit a wrong suggests greater malice, and to that extent exhibits greater deficiency of moral goodness and worth. In the terminology of economics, we would say that the psychological cost of lying is greater compared to non-lying deception, and hence that ceteris paribus lying testifies to a stronger motivation to deceive. This insight yields (one interpretation of) the classical view. (Motivation, in turn, may influence the rightness of actions, as cogently argued by Sverdlik [1996].) The source of the deeper inhibitions ultimately lies in the social psychology of communication. While non-lying deceptions are rather evasive ways of misleading others, which otherwise decent people may adopt in delicate social predicaments or under duress, outright lying is a more daring interpersonal position that requires a more shameless disposition. ‘The liar is more brazen’, observes Adler (1997: 442). This explains why many who find themselves unable to utter falsehoods to someone's face intentionally resort to saying misleading truths or to performing various nonlinguistic actions intended to mislead. Basic competence in human communicative norms makes people acutely sensitive to the fact that uttering falsehoods in front of others is a more flagrant and jarring disruption of human communicative expectations; lies inspire a special ‘sense of violation or outrage’ (Frankfurt 2009: 50); people consequently encounter deeper inhibitions against lying. Again, the upshot is that lying testifies to a looser moral stance regarding deception. In other words: although lying is not inherently more evil, being the kind of creatures that we are, we experience it as more offensive; therefore, going ahead and lying involves ipso facto greater meanness, and by virtue of this is morally worse. This reasoning supports the classical view.

If the ground of the classical view is a function of the greater psychological difficulty in lying, we would expect less deception in Lies compared to the other conditions. Rates of deception, however, were not significantly different. We can therefore conclude that to the extent that the moral explanation of the classical view is the greater malice in lying (as explained), our experimental results of equivalence in rates of deception in the three groups, again, undermine the classical view.

Finally, a third ground for the classical view involves the intrinsic nature of lying. The idea is that lying is inherently more deceptive, in the sense that the distance between falsehood and the truth is greater than the distance between a misleading truth and the truth (or between nonverbal gestures, whose truth value is vaguer, and the truth). Lying is thus a greater evasion of the truth and simultaneously an intrinsically greater deviation from truthfulness. It is thus more seriously immoral. Now if this is the conceptual ground for the classical view, then empirical findings about people's actual attitudes toward Lies, Falsely Implicating, and Nonverbal Deception can neither support nor oppose the classical view, normatively speaking. Accepting this ground seems therefore finally to draw a limit to the moral relevance of the experimental approach.

We argue that this impression is false. We ought to ask how we are to understand the idea that lying is ‘inherently more deceptive’. As explained, this presumably invokes the idea that what grounds the moral status of the different forms of deception is a conceptual truth regarding the epistemic properties of lying versus the other deceptions. But even if such a conceptual account about the greater distance between ‘lying’ and ‘truth’ is sound, it is not directly determinative of ‘level of deceptiveness’. The latter seems rather to be the empirical fact regarding the degree to which people are actually deceived by each of the different forms of deception. And it should be this latter fact concerning potency that is directly significant for the possible moral import (wrongness) of ‘inherent deceptiveness’. The logical or epistemic status of the different forms of deception vis-à-vis the notion of the truth is a different issue from the question of which form of deception in fact conceals the truth more effectively; only the latter correlates directly with moral wrongness. Now the question of which form of deception is more potent, or more effective in deceiving people, is not a question for armchair theorizing but a testable aspect of human communicative interaction. If it turns out that people generally deceive more successfully by performing non-lying deceptions, then for all practical purposes it is not true that lying is ‘more deceptive’.

Suppose, for example, that I say ‘I climbed the rope’ (which I did), while simultaneously motioning climbing a ladder (which I did not). Suppose further that, empirically, people in such a situation tend to believe the motioning more, implicitly assuming that it is more reliable (that is, assuming that I must have mistakenly said ‘rope’ when I in fact intended ‘ladder’). If I know this fact about human information processing and use it to deceive more efficiently via nonverbal deception (as contrasted with lying by saying ‘ladder’ while correctly motioning climbing a rope, which would result in less misleading on average), then we must conclude that nonverbal deception is worse, as far as the parameter of being singularly more deceptive is concerned. Another, more fantastic, example: if we all lived in Pinocchio's world, where lies (only) would immediately and universally manifest themselves on our noses, then lying would be consistently less deceptive than the other forms of deception. The point here is that the questions raised by such examples are empirical, despite the conceptual guise of ‘(level of) inherent deceptiveness’.

We argue that the combination of our results regarding investors and consultants suggests an interesting conclusion in this respect. Recall that the equivalence among investors meant that people trust potential deceivers (to send truthful messages) to a similar degree in all three conditions. Since the extent of deceiving by consultants in the three conditions was also similar, the combination of the results regarding investors and consultants suggests that the level of deceptiveness of the three forms of deception is, as a matter of fact, similar (that is, if a similar amount of deceiving across conditions has a similar deceptive impact, then the singular potency of deceptiveness is equivalent across conditions)! Hence, what seemed prima facie to be a conceptual point that poses a rigid limit to the empirical investigation of the ethics of deception turns out, upon further reflection, to be a function of our very experiment. Our experimental scheme can deliver a normative verdict even with respect to a parameter that seemed beyond empirical reach. Our results seem to align with the equivalence thesis—or again, strictly speaking, they undermine another possible reason for the classical view.

4. Concluding Remarks

While the normative question of the moral gradation of forms of deception has long been debated, it has thus far not been recognized that this question can, to a significant extent, if not predominantly, be addressed empirically. Demonstrating this was the objective of this research. Moreover, the potential fruitfulness of using methods of experimental economics in experimental moral philosophy, with the aim of drawing normative conclusions, has thus far remained untapped. In this essay, we have operationalized the normative dilemma regarding forms of deception in terms of a strategic game and were able to draw normative conclusions from our results.

Typical vignette-cum-questionnaire-based experimental studies poll people's moral judgments, yet normative conclusions obviously cannot be inferred validly from such psychological facts. At best, the common folk view could figure as one component in a holistic ‘inference to the best explanation’ (which is, usually, the limited kind of normative yield one may attempt to find in experiments in moral philosophy). In contrast, our method has been to identify implicit behavioral assumptions (‘empirical commitments’) of moral views and to test those experimentally. Thus, given that breaching trust is morally wrong and that breaching greater trust is ceteris paribus morally worse than breaching lesser trust, we can test which kind of deception breaches greater trust, by operationalizing level of trust as the amount of money invested in response to a potentially informative yet potentially deceptive message. This can yield moral arguments about degrees of wrongness. (As mentioned, our experiment does also yield results in traditional terms of descriptive ethics—these suggest that the classical view does not reflect folk normative commitments.)

A potential concern could be that participants perceive the strategic interaction as a ‘mere game’ in which deception need not be considered morally problematic (as in, say, a game of poker). We believe this is not a significant problem, for several reasons. First, the norm against deception is both strong and deeply entrenched. While it can be waived in particular situations, no indication of such waiving was hinted at in explaining the experiment to the subjects. (The title of our essay, ‘The Deceiving Game’, was not used in the experiment.) Moreover, deception in the experiment causes monetary loss to the deceived, which invokes the even stronger norm against fraudulent behavior. Up-to-date meta-analyses show substantial evidence that the behavior of participants in laboratory experiments conforms to moral norms against deception (Abeler, Nosenzo, and Raymond 2019; Gerlach, Teodorescu, and Hertwig 2019). In particular, participants' behavior in our experiment attests that deception was indeed perceived as misconduct. Sans moral considerations, the game-theoretical solution of our game requires that the messages be ignored as noninformative. The intuition behind this is clear: if the investors respond to the messages, consultants can only gain by choosing the message that maximizes investments, regardless of what they observe (for a treatment of cheap talk in strategic games, see Crawford and Sobel 1982). In contrast, de facto, messages are strongly contingent on the observed balls, and investors increase their average investment by approximately one third of their endowment if the consultant chooses the BLUE message. Neither result can be explained if participants perceive the situation as a game free of ethical constraints.
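To see the game-theoretic point numerically: whenever investors respond to messages at all, a payoff-maximizing consultant's best response ignores the observed balls, so messages can carry no information in equilibrium. A toy sketch with illustrative response figures (our numbers, not the data):

```python
# Illustrative average investments conditional on the message (hypothetical numbers)
invest_after = {"BLUE": 60.0, "ORANGE": 30.0}

# The consultant's payoff equals the investment, so the payoff of each message
# does not depend on which balls were observed.
for observed in ("blue majority", "orange majority"):
    best = max(invest_after, key=invest_after.get)
    print(f"Observed {observed}: payoff-maximizing message is {best}")

# Since the best response is the same regardless of the observation, messages
# would be uninformative, and rational investors should ignore them ("babbling").
```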

The classical view has been the prominent view about forms of deception for millennia, and still is today. Our experimental results together with our normative analyses pose a new (kind of) challenge to the classical view.

To be sure, the conclusions reached here are not final pronouncements. Our results should be extended and tested in several notable directions, including the following. (1) We identified testable empirical commitments at the basis of the normative arguments. To the extent that behavior varies across cultures, the normative conclusions might vary correspondingly. We tested these commitments with subjects hailing from a WEIRD (Western, educated, industrialized, rich, and democratic) society (Henrich, Heine, and Norenzayan 2010); additional tests with more diverse populations are required to ascertain the generality of the conclusions. (2) There is value in extending the tests to different contexts or scenarios, beyond the rather abstract and impersonal presentation of the original Deceiving Game. (3) Allowing repeated interactions and cross-examinations between subjects would simulate an important dimension of real-life communication. And so on. The contribution of this essay lies rather in presenting the basic experimental idea, showing how it can be put into actual practice, reporting seminal results, and providing a detailed analysis of how normative conclusions can be derived from them. This should provide a firm basis for future treatments.

The holy grail of experimental moral philosophy is to offer support for normative judgments. Yet experimental results alone cannot establish the relative weight of the moral reasons they bear on or, a fortiori, that one normative reason trumps all others and therefore yields the all-things-considered moral recommendation. Only philosophical analysis relying on philosophical theory can yield such conclusions. The more certain we are that the experimental design covers all plausible moral hypotheses, the weightier the normative conclusions derived from the experiment will be. And then, if all results point to the same conclusion, we can hope to approach normative knowledge asymptotically. This essay did not, as it could not, refute the classical view; it did, however, illustrate how experimental results can inform and influence the normative debate between two moral positions.

Supplementary Material

To view supplementary material for this article, please visit https://doi.org/10.1017/apa.2020.33.

Footnotes

We thank Shaul Shalvi for his important role in initiating this research, Ori Regev for providing great assistance in conducting Experiment 1, Uri Leibowitz and Max Hayward for very useful comments on an early draft, and John Doris for insightful suggestions in conversation. Various versions of the essay were presented at the DMEP-Ratio workshop, Jerusalem 2016; Buffalo Annual Experimental Philosophy Conference, University at Buffalo, 2016; Rocky Mountain Ethics Congress, University of Colorado–Boulder, 2017; Economic Science Association European meeting, Dijon 2019; Israel Organizational Behavior Conference, Tel Aviv 2020; London Judgment and Decision Making Seminar, 2020; ECONtribute workshop on social image and moral behavior, Cologne 2021. We thank participants in all these venues for helpful comments. Thanks also to two dedicated anonymous referees for this journal.

1 Mahon (2015) articulates the traditional definition of lying as ‘to make a believed-false statement to another person with the intention that the other person believe that statement to be true’. There is a debate in the philosophical literature as to whether lying requires an intention to deceive. In this essay, by lying, we mean deceptive lying.

2 ‘Nonverbal’ is often used interchangeably with ‘nonlinguistic’, but nonverbal deception can in effect be linguistic (when the nonverbal gesture has a robustly established meaning). This distinction is at times difficult to draw, but for this essay it is inconsequential.

3 The two central views in the debate, the classical view and the equivalence thesis, are not the only positions possible. Clea Rees (2014) has argued that falsely implicating (in her words, ‘merely deliberately misleading’, 59) is worse than lying, and other positions are also theoretically possible (e.g., that nonverbal deception is morally the worst). Because the substantive argument in what follows is the rejection of the classical view (not the adoption of any view), the existence of such (minor or theoretical) alternative positions does not affect our argument or conclusions.

4 There is little literature in psychology that specifically investigates the comparison of different forms of deception (for example, Rogers et al. 2017), and that literature does not engage in normative analysis as such. There is also little experimental philosophical work on people's usage of the relevant concepts (for example, Weissman and Terkourafi 2019), but that work too does not address our normative debate.

5 The method of experimental economics creates a microeconomic system in laboratory conditions. By designing the way in which participants’ decisions translate into actual payoffs, the experimenter is able not only to control the environment and the institutions of the system but also to induce preferences. See, for example, Smith (1994).
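
As a schematic sketch of such payoff-based preference induction, consider the following; the ENDOWMENT and MULTIPLIER parameters are hypothetical, not the Deceiving Game's actual ones.

```python
# A schematic sketch of inducing preferences by mapping decisions to
# payoffs, in the spirit of Smith (1994). Parameters are hypothetical
# illustrations, not the experiment's actual values.

ENDOWMENT = 100    # points initially given to the investor (hypothetical)
MULTIPLIER = 2.5   # gross return on a successful investment (hypothetical)

def investor_payoff(invested: float, state_is_favorable: bool) -> float:
    """The investor keeps the uninvested endowment; the invested amount
    pays MULTIPLIER times in the favorable state and is lost otherwise.
    Because points convert to real money, participants acquire a genuine
    stake in the preferences the design induces."""
    kept = ENDOWMENT - invested
    return kept + (MULTIPLIER * invested if state_is_favorable else 0.0)

print(investor_payoff(60, True), investor_payoff(60, False))  # 190.0 40.0
```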

6 To wit, since normative positions typically aim to be prescriptive to creatures like us, they ought to presuppose some understanding of how, in fact, we can and do operate; these presuppositions (or a subgroup of these) are their ‘empirical commitments’. Schroeder, Roskies, and Nichols describe this lucidly with regard to the question of moral motivation: ‘accounts of moral motivation typically presuppose commitments regarding the nature of psychological states such as beliefs, desires, choices, emotions, and so on, together with commitments regarding the functional and causal roles they play. Observations about the nature and the functional and causal roles of psychological states, it seems to us, are as much empirical as they are philosophical. At least, it is rather obscure how such claims are to be understood, if they are not to be understood as involving substantial empirical elements’ (Schroeder, Roskies, and Nichols 2010: 78–79).

7 Inferiority tests are the one-sided version of equivalence tests (Lakens, Scheel, and Isager 2018) and are appropriate when the question of interest is whether there is an effect in a predicted direction (see Rothmann, Wiens, and Chan 2011). Indeed, in this experiment we cannot confidently reject the hypothesis that consultants deceive more in Lies; however, since we are testing the arguments in favor of the classical view, this does not affect our conclusions.
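
For illustration, a minimal normal-approximation version of such a one-sided inferiority test for two deception rates might look as follows; the counts and the margin are invented, and this is not the analysis code used in the article.

```python
# A minimal normal-approximation inferiority test for two deception rates;
# all counts and the margin below are invented for illustration.
from math import sqrt
from statistics import NormalDist

def inferiority_test(x_lie, n_lie, x_other, n_other, margin):
    """Test H0: p_lie - p_other >= margin against H1: p_lie - p_other < margin.

    Rejecting H0 supports the claim that lying does not increase the
    deception rate by at least `margin`, the smallest effect of interest."""
    p1, p2 = x_lie / n_lie, x_other / n_other
    se = sqrt(p1 * (1 - p1) / n_lie + p2 * (1 - p2) / n_other)
    z = (p1 - p2 - margin) / se
    return z, NormalDist().cdf(z)  # one-sided p-value

z, p = inferiority_test(x_lie=45, n_lie=100, x_other=42, n_other=100, margin=0.10)
print(f"z = {z:.2f}, one-sided p = {p:.3f}")
```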

8 The actual color in Experiment 2 was green, not blue. We refer to the high signal as BLUE for consistency.

9 An even stronger position on this issue would be that the bottom-line (betting) behavior does not merely correlate with the wrongness of betraying trust but rather constitutes the very meaning of ‘trust’ (in the truthfulness of reporting). This could express venerable philosophical views such as Ludwig Wittgenstein's (1953) ‘meaning in use’.

10 The following arguments, extracted from the literature on deception, are cited here to demonstrate the de facto plurality of nonconverging views; it is not our intention or indeed business in this context to argue for any of them over any other.

References

Abeler, Johannes, Nosenzo, Daniele, and Raymond, Collin. (2019) ‘Preferences for Truth-Telling’. Econometrica, 87, 1115–53.
Adler, Jonathan. (1997) ‘Lying, Deceiving, or Falsely Implicating’. Journal of Philosophy, 94, 435–52.
Berstler, Sam. (2019) ‘What's the Good of Language? On the Moral Distinction between Lying and Misleading’. Ethics, 130, 5–31.
Bicchieri, Cristina. (2006) The Grammar of Society: The Nature and Dynamics of Social Norms. New York: Cambridge University Press.
Bok, Sissela. (1989) Lying: Moral Choice in Public and Private Life. New York: Vintage.
Carson, Thomas. (2010) Lying and Deception: Theory and Practice. Oxford: Oxford University Press.
Chisholm, Roderick, and Feehan, Thomas. (1977) ‘The Intent to Deceive’. Journal of Philosophy, 74, 143–59.
Cohen, Jacob. (1988) Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale: Erlbaum.
Crawford, Vincent P., and Sobel, Joel. (1982) ‘Strategic Information Transmission’. Econometrica, 50, 1431–51.
Fischbacher, Urs. (2007) ‘z-Tree: Zurich Toolbox for Ready-Made Economic Experiments’. Experimental Economics, 10, 171–78.
Frankfurt, Harry. (2009) On Bullshit. Princeton: Princeton University Press.
Gächter, Simon, and Schulz, Jonathan F. (2016) ‘Intrinsic Honesty and the Prevalence of Rule Violations across Societies’. Nature, 531, 496–99.
Gerlach, Philipp, Teodorescu, Kinneret, and Hertwig, Ralph. (2019) ‘The Truth about Lies: A Meta-analysis on Dishonest Behavior’. Psychological Bulletin, 145, 1–44.
Greiner, Ben. (2015) ‘Subject Pool Recruitment Procedures: Organizing Experiments with ORSEE’. Journal of the Economic Science Association, 1, 114–25.
Grice, Paul. (1989) Studies in the Way of Words. Cambridge, MA: Harvard University Press.
Hardin, Russell. (2002) Trust and Trustworthiness. New York: Russell Sage Foundation.
Henrich, Joseph, Heine, Stephen J., and Norenzayan, Ara. (2010) ‘The Weirdest People in the World?’ Behavioral and Brain Sciences, 33, 61–83.
Hugh-Jones, David. (2016) ‘Honesty, Beliefs about Honesty, and Economic Growth in 15 Countries’. Journal of Economic Behavior & Organization, 127, 99–114.
Koopman, P. A. R. (1984) ‘Confidence Intervals for the Ratio of Two Binomial Proportions’. Biometrics, 40, 513–17.
Lakens, Daniël, Scheel, Anne M., and Isager, Peder M. (2018) ‘Equivalence Testing for Psychological Research: A Tutorial’. Advances in Methods and Practices in Psychological Science, 1, 259–69.
Mahon, James Edwin. (2015) ‘The Definition of Lying and Deception’. In Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/win2016/entries/lying-definition/.
O'Neil, Collin. (2012) ‘Lying, Trust, and Gratitude’. Philosophy & Public Affairs, 40, 301–33.
Pepp, Jessica. (2019) ‘The Aesthetic Significance of the Lying-Misleading Distinction’. British Journal of Aesthetics, 59, 289–304.
Rees, Clea. (2014) ‘Better Lie!’ Analysis, 74, 59–64.
Rogers, Todd, Zeckhauser, Richard, Gino, Francesca, Norton, Michael I., and Schweitzer, Maurice E. (2017) ‘Artful Paltering: The Risks and Rewards of Using Truthful Statements to Mislead Others’. Journal of Personality and Social Psychology, 112, 456–73.
Rothmann, Mark D., Wiens, Brian L., and Chan, Ivan. (2011) Design and Analysis of Non-inferiority Trials. Boca Raton: Chapman & Hall.
Saul, Jennifer. (2012a) ‘Just Go Ahead and Lie’. Analysis, 72, 3–9.
Saul, Jennifer. (2012b) Lying, Misleading, and What Is Said: An Exploration in Philosophy of Language and in Ethics. Oxford: Oxford University Press.
Savage, Leonard. (1954) The Foundations of Statistics. New York: John Wiley and Sons.
Schroeder, Timothy, Roskies, Adina, and Nichols, Shaun. (2010) ‘Moral Motivation’. In Doris, John (ed.), The Moral Psychology Handbook (Oxford: Oxford University Press), 72–110.
Shiffrin, Seana. (2014) Speech Matters: On Lying, Morality, and the Law. Princeton: Princeton University Press.
Smith, Vernon L. (1994) ‘Economics in the Laboratory’. Journal of Economic Perspectives, 8, 113–31.
Strudler, Alan. (2010) ‘The Distinctive Wrong in Lying’. Ethical Theory and Moral Practice, 13, 171–79.
Sverdlik, Steven. (1996) ‘Motive and Rightness’. Ethics, 106, 327–49.
Webber, Jonathan. (2013) ‘Liar!’ Analysis, 73, 651–59.
Weissman, Benjamin, and Terkourafi, Marina. (2019) ‘Are False Implicatures Lies? An Empirical Investigation’. Mind & Language, 34, 221–46.
Williams, Bernard. (2002) Truth and Truthfulness. Princeton: Princeton University Press.
Wittgenstein, Ludwig. (1953) Philosophical Investigations. New York: Macmillan.
Figure 1. Deception and Trust in Experiment 1

Figure 2. Deception and Trust in Experiment 2
