
It’s still bullshit: Reply to Dalton (2016)

Published online by Cambridge University Press:  01 January 2023

Gordon Pennycook*
Affiliation:
Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo ON, Canada, N2L 3G1
James Allan Cheyne
Affiliation:
Department of Psychology, University of Waterloo
Nathaniel Barr
Affiliation:
The School of Humanities and Creativity, Sheridan College
Derek J. Koehler
Affiliation:
Department of Psychology, University of Waterloo
Jonathan A. Fugelsang
Affiliation:
Department of Psychology, University of Waterloo

Abstract

In reply to Dalton (2016), we argue that bullshit is defined in terms of how it is produced, not how it is interpreted. We agree that it can be interpreted as profound by some readers (and assumed as much in the original paper). Nonetheless, we present additional evidence against the possibility that more reflective thinkers are more inclined to interpret bullshit statements as profound.

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2016] This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Reply

Bullshit has been defined as something that is constructed without concern for the truth (Frankfurt, 2005). By this definition, bullshit statements can be true, false, or meaningless; which of these happens to apply is irrelevant to whether something is bullshit. Nonetheless, although bullshit statements can be incidentally true, bullshit is generally false and hence often problematic.

In our initial investigation of bullshit, we focused on statements that consisted of randomly selected buzzwords (Pennycook et al., 2015). We used 20 different statements across 4 studies (excluding items from Deepak Chopra’s Twitter feed, which we will not discuss further in this reply). Examples include “wholeness quiets infinite phenomena” (wisdomofchopra.com) and “we are in the midst of a high-frequency blossoming of interconnectedness that will give us access to the quantum soup itself” (sebpearce.com/bullshit). We labelled these statements “pseudo-profound bullshit” because: 1) they were constructed absent any concern for the truth and, generally for that reason, 2) they do not consistently have unambiguous meaning, though they can sometimes be interpreted by at least some people as having profound meaning.
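For concreteness, the sketch below illustrates how buzzword-based statements of this kind can be assembled. The word lists and sentence templates are illustrative placeholders of our own; they are not the actual vocabularies or grammars used by wisdomofchopra.com, sebpearce.com/bullshit, or the stimuli in Pennycook et al. (2015).

```python
# Illustrative sketch of a pseudo-profound sentence generator.
# The buzzword lists and templates below are placeholders, not the
# vocabularies used by wisdomofchopra.com or sebpearce.com/bullshit.
import random

NOUNS = ["wholeness", "interconnectedness", "awareness", "potentiality", "the quantum soup"]
VERBS = ["quiets", "transforms", "illuminates", "transcends", "nurtures"]
ADJECTIVES = ["infinite", "high-frequency", "boundless", "hidden", "subtle"]
TEMPLATES = [
    "{noun} {verb} {adj} phenomena",
    "we are in the midst of a {adj} blossoming of {noun}",
    "{adj} {noun} {verb} the present moment",
]

def generate_statement(rng=random):
    """Fill a randomly chosen syntactic template with randomly chosen buzzwords.

    The result is grammatical but produced with no concern for truth or
    meaning, which is what makes it bullshit in Frankfurt's (2005) sense.
    """
    template = rng.choice(TEMPLATES)
    sentence = template.format(
        noun=rng.choice(NOUNS),
        verb=rng.choice(VERBS),
        adj=rng.choice(ADJECTIVES),
    )
    return sentence[0].upper() + sentence[1:]

if __name__ == "__main__":
    for _ in range(3):
        print(generate_statement())
```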

In his commentary, Dalton notes, as do we, that at least some randomly generated statements can be taken as meaningful by some readers. Dalton takes this claim to have methodological implications, but, as we argue below, it cannot have such implications without further untenable assumptions. We take Dalton’s claim to be, in fact, conceptual, if not philosophical. Specifically, Dalton’s conceptual point appears to be based on a radical reader-response theoretical position in which the meaning of a text is solely what the reader makes of it. From such a perspective it is, of course, not possible to say, a priori, that any text is ultimately and always meaningless, as some reader will always have the last say. Dalton argues that it is (and will always be) possible for at least someone, somewhere, to find (or perhaps more aptly “construct”) meaning, or what they take to be meaning, by sufficiently contemplating any statement. We, of course, agree, as the very premise of our study was that people would report sentences designed without regard to meaning to be at least somewhat profound. Bullshit, following the definition offered by Frankfurt, however, depends on the intentions (or lack thereof) of the person uttering or writing the relevant statements. Bullshit that is viewed as profound is still bullshit.

As a consequence, without endorsing a radical reader-response theory, we note that Dalton’s primary point is consistent with the goal of our study. Namely, we hypothesized that people would indeed report randomly generated statements as not only meaningful but profound and, moreover, that people would vary in this propensity. This expectation was based on the assumption that people will find, or suppose that they have found, meaning in such statements. The very goal of the study was to investigate this tendency empirically, not to argue, as Dalton states, that: “if one cannot immediately discern meaning in something it is automatically bullshit.” It is important to recognize that even if we take a radical reader-response position, the only constraint on our original study was in the use of “pseudo-profound” as a label for the random sentences. Because they were constructed without any concern for the truth, they are bullshit (by the Frankfurt definition we followed in our original paper). Dalton appears to assume that our use of the term bullshit implies some sort of value judgment, as it often does in everyday use. In contrast, and following Frankfurt’s lead, we used bullshit as a technical term. This is the way we hope it will continue to be used in the academic literature.

With regard to Dalton’s claim that his argument is methodological, we note that this cannot be so unless one assumes that a sample of random computer-generated statements has the same probability of being interpreted as meaningfully profound as a sample of human-generated statements intended to be meaningful (if not profound). Methodologically, the key word in the foregoing sentence is sample. In our study we found and reported that a sample of human-generated profound quotations (e.g., “A wet person does not fear the rain”) was rated as more profound than samples of computer-generated random sentences (see Studies 3 and 4). Thus, the equality-of-meaning assumption was demonstrably false for our study.
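For readers who wish to run the analogous check on their own data, a minimal sketch of such a within-participant comparison follows. The DataFrame column names (quote_mean and bs_mean, each participant’s mean profundity rating of the quotations and of the randomly generated items) are hypothetical, and this is not the analysis script from our original studies.

```python
# Minimal sketch of a within-participant comparison of profundity ratings,
# assuming a DataFrame with hypothetical columns `quote_mean` (mean rating of
# genuine quotations) and `bs_mean` (mean rating of randomly generated items).
import pandas as pd
from scipy import stats

def compare_profundity(df: pd.DataFrame) -> dict:
    """Paired-samples t-test: are quotations rated as more profound than bullshit?"""
    t, p = stats.ttest_rel(df["quote_mean"], df["bs_mean"])
    mean_diff = (df["quote_mean"] - df["bs_mean"]).mean()
    return {"mean_difference": mean_diff, "t": t, "p": p}
```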

In reference to the “Wholeness quiets infinite phenomena” example, Dalton takes a phenomenological stance toward meaning: “To engage with a passage like this we need to contemplate it for more than a few seconds, perhaps a few minutes (or hours, days, or months) and watch what happens to our mind – this is the appropriate first person subjective experience and more appropriate outcome of interest.” In the research under discussion, however, such an approach is not only inappropriate, but altogether irrelevant. Our interest, in this initial study, was not in the first-person phenomenology of readers’ subjective experience (though it might constitute a possible and interesting subsequent line of research) but simply in participants’ profundity ratings of statements designed to be lacking in that very quality (i.e., as a way to index one’s receptivity to bullshit). Dalton’s argument does imply the interesting and plausible hypothesis that people who are more reflective, clever, and/or linguistically adept will be more apt to construct meaning in the ambiguous statements. Unfortunately for this hypothesis, greater reflectivity was associated with lower, not higher, profundity ratings for bullshit statements. One possibility is that more reflective people were indeed more able to find meaning where none was intended but were also more likely to realize that the meaning was constructed through their own cognitive efforts rather than by the ostensible author of the statements.

One potential response to this line of reasoning is that only some of the randomly generated statements may be potentially meaningful to some readers or, perhaps more precisely, that the statements varied in the ease with which they could be assigned some meaning. Dalton notes, for example, that he cannot derive meaning from the following statement: “We are in the midst of a high-frequency blossoming of interconnectedness that will give us access to the quantum soup itself”, though we suspect that some people might. This possibility suggests the further hypothesis that the association between our variables of interest and profundity ratings for the bullshit items might vary as a function of the ease of constructing profound meaning from randomly generated sentences. To test this, we created two new scales using a subset of the items from Study 2. The “more profound” scale took the mean of the 5 items that were assigned the highest average profundity rating and the “less profound” scale took the mean of the 5 lowest scoring items (see Table 1). Both scales had acceptable internal consistency (Cronbach’s α = .81 and .75 for the relatively more and less profound items, respectively). As is evident from Table 2, the two scales performed very similarly. Indeed, heuristics-and-biases performance was significantly more strongly correlated with the scale consisting of the more profound items (r = –.36) than with the scale consisting of the less profound items (r = –.23). The correlations were significantly different from one another according to Williams’s test, t(187) = 2.08, p = .038, though both coefficients were significantly different from zero (p’s < .01).
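For readers who wish to reproduce this kind of item-level re-analysis, the sketch below shows one way to form the two 5-item scales, estimate Cronbach’s α, and compare the two dependent correlations with Williams’s test. The column names (bs_1 through bs_10 for item ratings and hb_score for heuristics-and-biases performance) are hypothetical, and the code is a sketch rather than the script behind Table 2.

```python
# Sketch of the item-level re-analysis: split items into "more profound" and
# "less profound" 5-item scales, estimate Cronbach's alpha for each, correlate
# each scale with a criterion, and compare the two dependent correlations.
# Column names such as "bs_1" ... "bs_10" and "hb_score" are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def williams_test(r_jk, r_jh, r_kh, n):
    """Williams's t for two dependent correlations sharing variable j
    (here, the criterion correlated with two overlapping scales k and h)."""
    det = 1 - r_jk**2 - r_jh**2 - r_kh**2 + 2 * r_jk * r_jh * r_kh
    r_bar = (r_jk + r_jh) / 2
    t = (r_jk - r_jh) * np.sqrt(
        ((n - 1) * (1 + r_kh))
        / (2 * ((n - 1) / (n - 3)) * det + r_bar**2 * (1 - r_kh) ** 3)
    )
    p = 2 * stats.t.sf(abs(t), df=n - 3)
    return t, p

def reanalyze(df: pd.DataFrame, item_cols, criterion="hb_score"):
    # Rank items by mean profundity rating; take the 5 lowest and 5 highest.
    means = df[item_cols].mean().sort_values()
    less_items, more_items = means.index[:5], means.index[-5:]
    less_scale = df[less_items].mean(axis=1)
    more_scale = df[more_items].mean(axis=1)

    # Internal consistency of each 5-item scale.
    alpha_more = cronbach_alpha(df[more_items])
    alpha_less = cronbach_alpha(df[less_items])

    # Correlate each scale with the criterion, then compare the dependent rs.
    r_more = np.corrcoef(df[criterion], more_scale)[0, 1]
    r_less = np.corrcoef(df[criterion], less_scale)[0, 1]
    r_scales = np.corrcoef(more_scale, less_scale)[0, 1]
    t, p = williams_test(r_more, r_less, r_scales, n=len(df))

    return {"alpha_more": alpha_more, "alpha_less": alpha_less,
            "r_more": r_more, "r_less": r_less, "t": t, "p": p}
```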

Table 1: The 5 most and least profound bullshit statements in Pennycook et al.’s (2015) Study 2.

Table 2: Re-analysis of Pennycook et al.’s Study 2. Pearson product-moment correlations for 5 most profound and 5 least profound bullshit items. These data are for the full sample (N = 187). *** p < .001, ** p < .01, * p < .05.


This pattern of results is at variance with what we take to be the implications of Dalton’s argument. Specifically, if the observation that some participants may find transcendence in our bullshit items constrains our results, then bullshit items that are more likely to be subjectively meaningful for participants should be less strongly negatively correlated (or even positively correlated) with analytic thinking (see Footnote 1). Our results indicate that, if anything, relatively more profound bullshit is more strongly negatively correlated with analytic thinking, perhaps because it is more difficult to detect that such statements are, in fact, bullshit.

2 Conclusion

That it is possible for someone to find meaning in a statement does not prevent it from being bullshit. Indeed, bullshit that is not found at least somewhat meaningful would be rather impotent. Consider the evangelizing of politicians and so-called spin doctors, for example. Often, their goal is to say something without saying anything; to appear competent and respectful without concerning themselves with the truth. It is not the understanding of the recipient that makes something bullshit; it is the lack of concern for (and perhaps even understanding of) the truth or meaning of the statements by the one who utters them. Our original study concluded that people who are receptive to statements randomly generated without concern for meaning (i.e., bullshit) are less, not more, analytic and logical, as well as less intelligent. Dalton’s commentary does not undermine this conclusion.

Footnotes

* Funding for this study was provided by the Natural Sciences and Engineering Research Council of Canada.

1 To be clear, Dalton does not propose this analysis or any mechanism that might explain why more profound bullshit would be differentially associated with analytic thinking. Rather, our point is that this is a necessary condition for Dalton’s observation that some bullshit items are more (or, perhaps, genuinely) profound to constrain the results of our original studies. Put differently, our inclusion of items that are viewed as relatively more profound does not confound or constrain our findings in any way.

References

Dalton, C. (2016). Bullshit for you; transcendence for me. A commentary on “On the reception and detection of pseudo-profound bullshit”. Judgment and Decision Making, 11, 121–122.
Frankfurt, H. G. (2005). On Bullshit. Princeton, NJ: Princeton University Press.
Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2015). On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making, 10, 549–563.