1 Introduction
Rational action requires evidence. Given that beliefs inform action, beliefs ought to be informed by evidence. A longstanding broad perspective on human cognition holds that reason is, at least to some extent, responsible for accurate belief formation (Baron, 2008; Kohlberg, 1969; Piaget, 1932; Stanovich, 2005). However, the human capacity to revise beliefs in the face of conflicting evidence is, charitably, imperfect. Humans are prone to motivated reasoning (Kunda, 1990), identity-protective cognition (Kahan et al., 2012), confirmation bias (Nickerson, 1998), myside bias (Perkins, 2019; Stanovich, West & Toplak, 2013), naïve realism (Ross & Ward, 1996), and bias blind spots (Pronin, Lin & Ross, 2002). There is widespread disagreement about the role and consequences of the human capacity to reason.
Various analogies have been used to simplify the broad perspectives on human thought and, although they may be oversimplifications, they illustrate the disagreement. For example, it has been argued that human reasoning is better characterized by analogy to that of lawyers than philosophers (Haidt, 2001, 2012) – that is, the function of human reason is to form arguments to convince others, as is the goal of lawyers, and not necessarily to form accurate beliefs, as is the goal of philosophers (Mercier, 2016; Mercier & Sperber, 2011). Of course, the analogy does not imply that people only reason like lawyers or like philosophers, but rather that the typical characteristics of human cognition are more similar to one frame of thinking than the other. To simplify, some researchers have disputed the common idea that reasoning facilitates sound judgment by pointing to cases (e.g., motivated reasoning) where explicit reasoning actually hurts judgment (Kahan et al., 2012; Kahan, Peters, Dawson & Slovic, 2017).
Relatedly, given evidence that we rely heavily on a number of heuristics and biases (Kahneman, Slovic & Tversky, 1982) and that unconscious processes have an (apparently) widespread impact on our decisions (Bargh & Chartrand, 1999), a prominent perspective is that explicit reasoning and deliberation are simply not very effective in the context of powerful intuitions (e.g., Bargh, 1999; Bargh, Schwader, Hailey, Dyer & Boothby, 2012; Dijksterhuis & Strick, 2016; Gigerenzer, 2007; Haidt, 2001). One famous analogy is that human cognition is like an emotional (or intuitive) dog with a rational tail (Haidt, 2001) (or, in a more recent analogy, an intuitive elephant and an analytic rider; Haidt, 2012): that is, our capacity to reason does not effectively override our intuitions and emotional impulses.
These three perspectives can be summarized simplistically in terms of three general claims about the nature of human reasoning: 1) that reasoning prototypically helps people make good decisions and come to informed beliefs (“reasoning is helpful”); 2) that reasoning is prototypically ineffective, since intuition dominates human cognition (“reasoning is helpless”); and 3) that reasoning prototypically undermines sound judgment and exacerbates motivated reasoning and (for example) political polarization (“reasoning is hurtful”). Although any of the three accounts may be the best explanation for the underlying psychology behind any given belief, opinion, or value, the critical question here is which account offers the best broad description of high-level human cognition (i.e., which has the greatest explanatory power across various beliefs, opinions, and values).
Although recent work has attempted to adjudicate between these three broad accounts by investigating individual differences in analytic thinking (e.g., Pennycook, Fugelsang & Koehler, 2015a; Pennycook & Rand, 2019b; Pennycook, 2018), this work is vague about the specific aspects of analytic thinking that support good thinking. Here we contend that people differ in their explicit stance toward whether beliefs ought to change according to evidence and that this has major consequences for the beliefs, opinions, and values that they hold. That is, some may place stronger value on changing their beliefs and taking relevant evidence into account – and thereby (for example) take a stance toward reasoning that is more akin to a philosopher’s – whereas others may place stronger value on maintaining constancy and defending prior beliefs – and thereby take a stance toward reasoning that is more akin to a lawyer’s. Moreover, this meta-belief may shape what sorts of beliefs individuals endorse as adults, indicating that reasoning really does have an impact on intuitive beliefs. The goal of the present work is to investigate these possibilities.
1.1 Is reasoning helpful or helpless?
Dual-process theories of reasoning, which distinguish between autonomous (intuitive) processes and those that are accomplished via some form of deliberative control (De Neys, 2017; Evans & Stanovich, 2013; Pennycook, Fugelsang & Koehler, 2015b; Thompson, Prowse Turner & Pennycook, 2011), typically emphasize how controlled reasoning processes can override (sometimes incorrect) intuitive responses. Although this emphasis does not imply that reasoning and accuracy are synonymous, it does suggest that there are meaningful and important cases where such an association is present (Evans, 2012). To take a recent example, individuals who are more disposed toward thinking analytically (as indexed by the Cognitive Reflection Test; Frederick, 2005) are less likely to fall for fake news, regardless of whether it is consistent or inconsistent with their political ideology (Pennycook & Rand, 2019b). There is also evidence that analytic thinking is associated with disbelief in a variety of epistemically suspect claims (Pennycook et al., 2015a), such as paranormal and religious beliefs (Pennycook, Cheyne, Seli, Koehler & Fugelsang, 2012; Pennycook, Ross, Koehler & Fugelsang, 2016; Shenhav, Rand & Greene, 2012), conspiratorial ideation (Swami, Voracek, Stieger, Tran & Furnham, 2014), anti-science beliefs (and specifically rejection of evolution) (Gervais, 2015), and pseudo-profound bullshit receptivity (Pennycook, Cheyne, Barr, Koehler & Fugelsang, 2015). In addition, there is evidence that reliance on intuition is associated with traditional moral values (Pennycook, Cheyne, Barr, Koehler & Fugelsang, 2014; Royzman, Landy & Goodwin, 2014) and conservative political ideology (Jost, 2017) (but perhaps more so with political apathy; see Pennycook & Rand, 2019a).
In contrast, a large and diverse body of evidence supports the idea that reason is perhaps overrated. For instance, intuitive heuristics are often extremely useful and, in some contexts, may actually be more accurate than reasoned reflection (Gigerenzer, 2007; Gigerenzer, Todd & ABC Research Group, 1999). Illustrative (albeit extreme) examples of this come from research on expertise (Klein, 2008), which shows that naturalistic decision making (e.g., among chess masters or firefighters) allows for very rapid yet extremely accurate choices (Kahneman & Klein, 2009). Furthermore, social psychology in the 1990s provided numerous examples of the surprising power of intuition (Bargh & Chartrand, 1999; Dijksterhuis & Strick, 2016; Haidt, 2012). Although there have been questions about the replicability of some of these effects (e.g., so-called “social priming”; Cesario, 2014), the strong influence of intuition on decision making is not a matter of dispute (Evans, 2008; Kahneman & Klein, 2009).
1.2 Does reasoning undermine sound judgment?
In contrast to the work just reviewed, there is also considerable evidence for motivated reasoning effects (Kunda, 1990) – that is, cases where reasoning actively hurts sound judgment and causes people to become further entrenched in what they already believe (Kahan, 2013). For example, people tend to dismiss information that is inconsistent with their political ideology (Lodge & Taber, 2005; Redlawsk, 2002; Redlawsk, Civettini & Emmerson, 2010; Strickland, Taber & Lodge, 2011) and engage in biased search for information that is supportive of their beliefs (i.e., confirmation bias; Knobloch-Westerwick, Mothes & Polavin, 2017; Nickerson, 1998). In fact, there is evidence that political polarization about contentious scientific issues (such as global warming) is actually greater among individuals who are more intelligent (Kahan et al., 2012; Kahan, Peters, Dawson & Slovic, 2017; Sarathchandra, Navin, Largent & McCright, 2018) and who report having a more actively open-minded thinking style (Kahan & Corbin, 2016; but see Baron, 2017).
A parsimonious broad account of these findings is that individuals engage analytic reasoning processes not in the service of accuracy, but as a means to protect their identity (Kahan, 2013) or to form convincing arguments (Mercier, 2016). This perspective flips the common conception of human reasoning on its head and suggests that reasoning often makes people more unreasonable. Consistent with this account, a recent meta-analysis indicated that partisan bias effects (motivated reasoning) were equivalent across the political spectrum (Ditto et al., 2019; but see Baron & Jost, 2019). This research indicates that reasoning is typically (or, at least, frequently) used in the service of justifying prior beliefs, as opposed to updating them based on the evidence presented. To investigate this issue, we will focus on the idea that individuals who are more prone to engage in reasoning are more (not less) politically polarized. Consistent evidence for increased polarization among highly reflective people would indeed indicate that motivated reasoning is to be expected – to return to an earlier analogy, that humans reason more like lawyers than philosophers.
1.3 Actively open-minded thinking
Despite research showing evidence for motivated reasoning and the power of intuitions, the previously reviewed associations between analytic thinking and various beliefs and values suggest that reasoning is nonetheless used to modify beliefs in everyday life (although other factors are of course involved in determining what people believe). That is, people who are more reflective when given a trick question from the Cognitive Reflection Test (CRT) hold different beliefs than more intuitive people. A parsimonious explanation is that the same people who reflect on the CRT also tend to reflect on their beliefs (i.e., they use reason to modify beliefs). Nonetheless, the disposition to engage in analytic thinking is not the same as having an actively open-minded stance in general (Baron, 1985; Stanovich & West, 1997) or toward evidence in particular (Baron, 2019; Baron, Scott, Fincher & Metz, 2015). Indeed, analytic thinking may be used both to override intuitions (i.e., to modify or undermine prior beliefs) and to rationalize or bolster intuitions (i.e., to reinforce prior beliefs) (Pennycook, Fugelsang, et al., 2015b).
In the present work, we will focus instead on people’s beliefs about whether beliefs and opinions should change according to evidence. Moreover, we will investigate a wide range of beliefs, values, and opinions together as a way to systematically assess the potential long-term impact of people’s thinking style on what they think.
The idea that some people may not be disposed to use evidence to inform their beliefs has been broached previously. For example, people may differ in their “criteria” for belief; although some hold that evidence and scientific consensus are most important, others believe that “knowledge of the heart” should also be a central consideration (Metz, Weisberg & Weisberg, 2018). Indeed, the actively open-minded thinking (AOT) scale was created to assess (in part) the belief that it is good to seek evidence that may conflict with intuitions (Baron, 1985, 2008; Baron, Scott, Fincher & Metz, 2015; Stanovich & West, 1997; see also Price, Ottati, Wilson & Kim, 2015, for a measure based more on self-report) – a tendency that is associated with improved decision making over and above intelligence or cognitive ability (Stanovich & West, 1998, 2000). Moreover, much like individual differences in cognitive reflection, high AOT has been linked to skepticism about supernatural claims (Baron et al., 2015; Pennycook, Cheyne, Barr, Koehler & Fugelsang, 2014; Svedholm & Lindeman, 2013) and superstition (Sá, West & Stanovich, 1999), indicating that the AOT scale may index some aspects of openness to evidence in belief formation and revision. Indeed, Svedholm-Häkkinen and Lindeman (2018) found a “fact resistance” factor within the broader AOT measure that consists of items asking about beliefs about changing beliefs according to evidence. As noted by Baron (2019), it is this “flexible thinking” dimension that is most central to the concept of AOT. Shortened versions of the AOT scale have also typically focused largely on the belief revision questions (Baron et al., 2015; Haran, Ritov & Mellers, 2013), which further suggests that these items are of particular relevance for the AOT’s predictive validity. Nonetheless, the broad consequences of this meta-belief across a variety of domains have not yet been systematically investigated, despite having major relevance for several broad theories of human cognition. We will refer to our subscale simply as actively open-minded thinking about evidence (AOT-E). The items for our scale can be found in Table 1. Our AOT-E scale is not identical to versions used in the past, although some of the items are drawn from previous (longer) versions of the AOT. For further information on how we derived the AOT-E scale from the larger full AOT scale, see the two validation studies presented in the supplementary materials.
1.4 Current work
Is reasoning prototypically helpful, helpless, or hurtful? One possibility that has not yet been broached is that the three perspectives primarily describe different people. That is, people have different beliefs about whether beliefs should change according to evidence (“meta-beliefs”), and this has consequences for the effectiveness of their reasoning and, therefore, for the types of beliefs that they hold. The goal of the present investigation is to determine whether AOT-E is correlated with as wide a variety of beliefs, values, and opinions as is feasible in a single study. If AOT-E is consequential, it should be associated with people’s stances on a number of important issues. To this end, we investigated conspiratorial, moral, paranormal, political, religious, and science beliefs.
2 Study 1
2.1 Method
2.1.1 Participants
American participants were recruited from Mechanical Turk on February 18th, 2016. We set our target sample size at 350 and oversampled, recruiting 380 participants (assuming some degree of attrition due to random responding). Only 3 participants responded affirmatively when asked if they had responded randomly at any point during the survey, and 3 participants did not answer affirmatively when asked if they were fluent in English. The resulting sample (N = 375, mean age = 35.8) consisted of 216 males and 158 females (1 participant did not indicate their gender).
2.1.2 Materials
Measures were converted into POMP (percent of maximum possible) scores, i.e., 100 × (raw − min)/(max − min), so that each measure ranged from 0 to 100 (Cohen, Cohen, Aiken & West, 1999). Data and materials for all studies are available on OSF: https://osf.io/xqzse/.
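The POMP conversion is a simple linear rescaling of each score relative to its scale endpoints. A minimal sketch in Python (the item values and scale range shown are illustrative, not study data):

```python
import numpy as np

def pomp(raw, scale_min, scale_max):
    """Percent-of-maximum-possible score: 100 * (raw - min) / (max - min)."""
    raw = np.asarray(raw, dtype=float)
    return 100.0 * (raw - scale_min) / (scale_max - scale_min)

# Example: scale means from a 6-point agreement scale (1-6) rescaled to 0-100.
print(pomp([4.5, 2.0, 6.0, 3.25], scale_min=1, scale_max=6))  # [ 70.  20. 100.  45.]
```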
AOT-E.
We administered the AOT-E scale that is presented in Table 1. Participants responded on a scale from 1) Strongly disagree to 6) Strongly agree. The AOT-E had strong reliability (α = .87). Participants rated themselves as, on average, willing to change their beliefs according to evidence (M = 69.8, SD = 19.1 – scale ranges from 0–100). Only 19.2% of the participants were at or below the scale midpoint (indicating a resistance to evidence).
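Reliability coefficients (Cronbach’s α) like the one just reported appear throughout for each scale. For reference, a minimal sketch of the standard computation, using toy (hypothetical) data rather than study data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 4 respondents x 3 items on a 6-point agreement scale.
responses = np.array([[5, 4, 5],
                      [2, 2, 3],
                      [4, 5, 4],
                      [1, 2, 1]])
print(round(cronbach_alpha(responses), 2))
```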
Conspiracist ideation.
Participants completed a 15-item general conspiracy beliefs scale (Brotherton, French & Pickering, 2013). The scale included items such as “A small, secret group of people is responsible for making all major world decisions, such as going to war” (α = .97). Responses were made on the following 5-point scale: 1) Definitely not true, 2) Probably not true, 3) Not sure/cannot decide, 4) Probably true, 5) Definitely true.
Paranormal belief.
Participants completed a slightly revised Paranormal Belief Scale (Pennycook, Cheyne, Seli, Koehler & Fugelsang, 2012; Tobacyk, 2004) with the religious belief items excluded (α = .95). The scale consisted of 22 items sampled from 6 categories of supernatural belief (example items in parentheses): Psi (“Mind reading is possible”), Witchcraft (“Witches do exist”), Omens of luck (“Black cats can bring bad luck”), Spiritualism (“It is possible to communicate with the dead”), Extraordinary life forms (“The Loch Ness monster of Scotland exists”), and Precognition (“Astrology is a way to accurately predict the future”). Participants indicated their belief by responding on a 7-point scale from 1) Strongly disagree, to 4) Uncertain, to 7) Strongly agree.
God Skepticism.
Skepticism about God was assessed using the following question: “What sort of God, if any, do you believe in?”, with the following options presented in order of increasing skepticism (Pennycook et al., 2012; Pennycook, Ross, et al., 2016): 1) A personal God [Theism], 2) God as an impersonal force [Pantheism], 3) A God who created everything, but does not intervene in human affairs [Deism], 4) Don’t know whether or not any Gods exist [Negative Agnostic], 5) Don’t know whether or not any Gods exist and no one else does either [Positive Agnostic], 6) I don’t believe in Gods of any sort [Negative Atheist], and 7) I believe that God does not exist [Positive Atheist].
Moral values.
We used Pennycook, Cheyne, Barr, Koehler, and Fugelsang’s (2014) moral values scale, which consisted of 6 care/fairness (“individualizing”) and 4 traditional (“binding”) moral values (Graham et al., 2011). Participants were asked to rate how important the values were to their moral thinking on a 7-point scale from 1) Irrelevant to 7) Extremely important. Care/fairness values included being kind, supporting the autonomy of others, being helpful, being fair, avoiding harm, and supporting the rights of others (α = .85). Traditional values included showing respect for traditions, being patriotic and loyal, showing respect for legitimate authority, and being pure by avoiding carnal pleasures and disgusting things (α = .80).
Political ideology.
Participants were asked to indicate their stance on social and economic issues separately on scales from 1) Very liberal, to 3) Moderate, to 5) Very conservative. Following Pennycook and Rand (2019a), we computed four political categories based on the convergence between social and economic political ideology: 1) Consistent Liberals, who are liberal/very liberal on both social and economic issues; 2) Consistent Conservatives, who are conservative/very conservative on both social and economic issues; 3) Libertarians, who are liberal/very liberal on social issues but conservative/very conservative on economic issues; and 4) Consistent Moderates, who are moderate on both social and economic issues. However, because of the liberal political skew of Mechanical Turk, there were only 60 Consistent Conservatives in our sample compared to 153 Consistent Liberals.
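The four-category classification is a simple rule over the two 5-point ideology items. A minimal sketch, assuming 1–2 counts as liberal, 3 as moderate, and 4–5 as conservative; how other mixed combinations (e.g., socially conservative but economically liberal) were handled is our assumption, not something specified above:

```python
def political_category(social, economic):
    """Classify a respondent from the two 5-point ideology ratings
    (1-2 = liberal, 3 = moderate, 4-5 = conservative)."""
    def bucket(rating):
        return "lib" if rating <= 2 else ("mod" if rating == 3 else "con")

    s, e = bucket(social), bucket(economic)
    if s == e == "lib":
        return "Consistent Liberal"
    if s == e == "con":
        return "Consistent Conservative"
    if s == "lib" and e == "con":
        return "Libertarian"
    if s == e == "mod":
        return "Consistent Moderate"
    return None  # remaining mixed combinations fall outside the four categories

print(political_category(social=2, economic=5))  # -> Libertarian
```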
Political opinions.
We also surveyed a range of political opinions (see Table 2). Participants were asked to indicate agreement/disagreement on a 7-point scale from 1) Strongly disagree to 7) Strongly agree. As is evident from Table 2, three of the items did not correlate particularly highly with political ideology (microaggressions, campus free speech [coded so that support of free speech was counted as conservative], and men experiencing sexism). We therefore created a Conservative Opinions scale (α = .81) using all items except these three, with all items re-scored so that a higher score indicated a more conservative opinion (a brief re-scoring sketch follows the table notes below). Participants were also asked to indicate their relative trust in the government on a scale from 1) Strongly Distrust to 5) Strongly Trust (this was also uncorrelated with political conservatism, r = −.07).
† The following note was also presented to participants: Microaggressions are defined as “brief, everyday exchanges that send denigrating messages to certain individuals because of their group membership.”
All correlations are significant at p < .001 except the last, which is n.s.
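The re-scoring described above amounts to reflecting liberal-worded items around the scale midpoint before averaging. A minimal sketch with hypothetical responses (which items are reflected depends on their wording):

```python
import numpy as np

def reverse_score(x, scale_min=1, scale_max=7):
    """Reflect 7-point responses around the scale midpoint (1 -> 7, 7 -> 1)."""
    return (scale_max + scale_min) - np.asarray(x, dtype=float)

# Hypothetical responses to a liberal-worded item (agreement = liberal stance);
# after reverse-scoring, higher values indicate the more conservative opinion.
liberal_item = [7, 6, 2, 4]
print(reverse_score(liberal_item))  # -> [1. 2. 6. 4.]

# The Conservative Opinions score is then the mean of the retained items after
# any needed re-scoring, converted to POMP as described above.
```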
Free Market Ideology.
Participants completed a 5-item Free Market Ideology measure (Heath & Gifford, 2006). The scale assesses belief in the power of free markets (α = .85) and includes items such as: “An economic system based on free markets unrestrained by government interference automatically works best to meet human needs.” Responses were provided on a 7-point scale from 1) Strongly disagree to 7) Strongly agree.
Science beliefs.
We created a science belief scale based on various contemporary scientific issues. In particular, we selected a number of typical science-related beliefs (Table 3): evolution, anthropogenic global warming, the big bang theory, an old Earth, and stem cell research. We also attempted to use items that have been associated with “liberal” anti-science attitudes (Table 3): resistance to technology (reverse scored), genetically modified organism (GMO) resistance (reverse scored), vaccines as a cause of autism (reverse scored), and belief in modern medicine. However, as is evident from Table 3, political conservatives were more likely to hold the more anti-scientific stance on every single issue – even issues often associated with political liberalism. Nonetheless, consistent with prior research, there was large variability in how strongly conservatism predicted anti-scientific attitudes (Rutjens, Sutton & van der Lee, 2018). Participants responded on a 7-point scale from 1) Strongly disagree to 7) Strongly agree; however, for our primary analysis, all items were scored such that a higher value indicated a more pro-science belief. The full scale had good reliability, α = .84. Participants were also asked to indicate their relative trust in scientists on a scale from 1) Strongly Distrust to 5) Strongly Trust.
Demographics.
Participants were given a demographic questionnaire that included the following items: age, gender, and English proficiency. Social and economic political ideology were included in the demographics questionnaire.
2.1.3 Procedure
Participants either completed the AOT-E at the beginning of the survey or at the end (but before demographics). The presentation order did not change the pattern of results and the aggregate results will therefore be reported. Otherwise, the order of the measures was as follows: 1) conspiracist ideation, 2) paranormal belief, 3) moral values, 4) science beliefs, 5) political opinions, 6) free market ideology, 7) theism, and 8) demographics (including political ideology).
2.2 Results and Discussion
As is evident from Table 4, AOT-E was strongly associated with every other primary measure. Individuals who believe that beliefs should change according to evidence (those high in AOT-E) were: a) less likely to believe conspiratorial, paranormal, and religious (and, specifically, theistic) claims; b) less likely to hold traditional moral values but more likely to adopt care/fairness moral values; c) less conservative in terms of both social and economic ideology (including free market ideology) and across a range of specific political opinions; and d) less likely to hold anti-science beliefs. Gignac and Szodorai (2016) meta-analyzed typical effect sizes across social psychology and found that correlations (r) of .10, .20, and .30 can be considered relatively small, medium, and large, respectively. Using this metric, AOT-E was a remarkably strong predictor of most factors. With the exceptions of care/fairness moral values (r = .26) and single-item fiscal conservatism (r = −.24), every effect size was above what would be considered large based on empirical norms. The correlations with conservative opinions and pro-science beliefs, in particular (r’s greater than .60), were well above the 95th percentile (r = .45) of effect size norms for individual differences research in psychology (Gignac & Szodorai, 2016). This overall pattern of results undermines the idea that reasoning is ineffective and is consistent with the general claim that reasoning has major impacts on our beliefs and values.
To further understand the scope of AOT-E’s predictive validity, we also investigated the extent to which it predicted specific political opinions (Table 5). Individuals who indicated being more actively open-minded about evidence held broadly liberal political views. Indeed, AOT-E was less predictive for the items that were less strongly associated with political ideology: whether men experience sexism on par with women (AOT-E was significantly associated with disagreement, r = −.24); whether microaggressions are a serious problem in educational contexts (AOT-E was slightly but non-significantly associated with disagreement, r = .07); and whether students should be able to block controversial speakers from giving talks at their university (AOT-E was significantly associated with disagreement, r = .25). The items most strongly associated with political conservatism were the most strongly negatively associated with AOT-E.
† Campus free speech was not significantly associated with conservatism (see Table 2). At any rate, a high score indicates opposition to the idea that “students should be able to block controversial speakers from giving talks at their university”.
The pattern of results for individual science belief items (Table 6) was very clear (and plainly in support of the “reasoning helps” perspective): AOT-E was associated with more agreement with scientists, regardless of whether the issue pertained to agreement with a clear scientific consensus (such as anthropogenic global warming or the big bang) or disagreement with an anti-scientific belief (such as that GMOs are unhealthy or that vaccines cause autism). AOT-E was also positively associated with general trust in scientists.
3 Study 2
The results of Study 1 indicate that AOT-E is a very strong predictor of a wide range of beliefs and opinions. There are, however, three key issues that the data from Study 1 leave unresolved. The first pertains to the perhaps implausibly large effect sizes found in Study 1. A recent paper by Stanovich and Toplak (2019) raised an important point that pertains to the AOT (and that applies to the AOT-E): when asked about “beliefs”, some individuals may assume that the question is really about religious beliefs. Indeed, Stanovich and Toplak found that the extremely high correlation between AOT and religious belief can be partially (but not fully) accounted for by the “belief revision” items (that is, the same class of items that make up the AOT-E). Of course, this may be partly due to the possibility that AOT-E plays a major role in belief formation (as we have argued). However, it may also be the case that religious believers are particularly opposed to revising their religious beliefs, but less opposed to revising beliefs in general. Consistent with both of these possibilities, Stanovich and Toplak found that items using slightly different wording (which did not invite the religious belief presumption) continued to predict religious belief, but not as strongly. In Study 2, we therefore modified the AOT-E to ask about “opinions” instead of “beliefs” (see Table 7). We also changed the wording of an additional item so that there would be an equal number of standard and reverse-coded items. Participants in Study 2 were administered either the original AOT-E or the revised AOT-E. Our goal was to ask whether the results of Study 1 are robust to variations in AOT-E scale wording, even if effect sizes vary somewhat.
A second drawback of Study 1 is that our sample came from Mechanical Turk and is therefore particularly unrepresentative of political conservatives. This is a notable drawback because the association between AOT-E and political opinions may differ depending on whether individuals are politically liberal or conservative. As such, in our second study, we collected a sample from Lucid for Academics – a source that provides American samples that are nationally representative on age, gender, ethnicity, and geography (based on quota-matching), and that therefore provides a more even and representative split of liberals and conservatives (Coppock & Mcclellan, 2019; Pennycook & Rand, 2019a).
Third, many of the AOT-E correlates reported in Study 1 have, in previous research, been shown to correlate with performance on the Cognitive Reflection Test (CRT; Frederick, 2005; Pennycook, Fugelsang, et al., 2015a) – a measure intended to assess the broad disposition to think analytically that also correlates with the AOT (Toplak, West & Stanovich, 2011). Thus, in Study 2 we included the CRT to assess the predictive strength of AOT-E relative to the CRT.
3.1 Method
3.1.1 Participants
American participants were recruited from Lucid for Academics on April 19th, 2019. We recruited 700 participants, who were randomly assigned to one of two conditions. In total, 751 participants began the study but 60 did not finish. We also removed individuals who responded affirmatively when asked if they responded randomly at any point during the survey (77 from the original AOT-E condition and 76 from the revised AOT-E condition). The resulting sample (N = 539, Mean age = 45.4) consisted of 251 males and 278 females, 1 transgender female, 1 transgender male, 3 trans/non-binary, 4 “not listed”, and 1 who preferred not to answer.
3.1.2 Materials
Measures identical to Study 1.
The following measures were administered as in Study 1: Conspiracist ideation, paranormal belief, God skepticism, moral values, political ideology, political opinions, free market ideology, and trust in scientists. Unlike in Study 1, all of the political opinion items were significantly associated with political ideology (Table 8). We therefore used all of the items to form the political opinions scale (α = .72).
AOT-E.
Participants were administered either the original or the revised AOT-E scale, as outlined in Table 7. Reliability was good for both scales (original: α = .72; revised: α = .74), albeit not as strong as in Study 1. Participants reported being more actively open-minded when asked about opinions (revised scale; M = 65.5, SD = 16.5) than beliefs (original scale; M = 56.7, SD = 17.2), t(537) = 6.07, SE = 1.45, p < .001. Whereas 43% of the sample were at or below the scale midpoint when asked about beliefs, only 20.4% were at or below the scale midpoint when asked about opinions. Thus, although only a minority indicated a resistance to evidence in both conditions, this was more common when asked about beliefs than about opinions. This is what would be expected if the conflation of beliefs with religious beliefs were causing some individuals to indicate a resistance to evidence. Alternatively (or in addition), it is possible that people are simply more open to changing opinions (which may be matters of taste or preference) than beliefs (which may refer more to people’s positions on issues of apparent fact). At any rate, the revised AOT-E removed the apparent bias against religious individuals (Stanovich & Toplak, 2019).
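The comparison between the two AOT-E versions is a standard independent-samples t-test on the POMP scores. A sketch with simulated data matching the reported means and SDs (the 270/269 split between conditions is illustrative, not the actual allocation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated AOT-E POMP scores approximating the reported means/SDs (hypothetical data).
original = rng.normal(56.7, 17.2, 270)
revised = rng.normal(65.5, 16.5, 269)

res = stats.ttest_ind(revised, original)  # Student's t-test, df = n1 + n2 - 2
df = len(original) + len(revised) - 2
print(f"t({df}) = {res.statistic:.2f}, p = {res.pvalue:.2g}")
```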
Cognitive Reflection Test (CRT).
We used a re-worded version (Pennycook & Rand, 2019b) of the three-item CRT (Frederick, 2005). The CRT consists of word problems that cue an incorrect intuitive response and that therefore partially index one’s disposition to engage in reflective reasoning (Campitelli & Gerrans, 2014; Pennycook, Cheyne, Koehler & Fugelsang, 2016; Toplak et al., 2011). The Lucid sample had particularly low accuracy on the CRT (M = .16, SD = .28; i.e., roughly 0.5 out of 3 correct, on average – 70% of the sample got 0 out of 3). As a consequence, reliability was relatively low for the CRT (α = .64).
Religious belief.
In addition to the theism measure used in Study 1, we also included a full religious belief scale (via Pennycook et al., 2016). Participants were asked to indicate their degree of belief in the following supernatural religious claims: afterlife, heaven, hell, miracles, angels, demons, soul, devil/Satan, and God. Participants responded on a 5-point scale from 1) Strongly disagree to 5) Strongly agree. The religious belief scale had excellent reliability (α = .95). Unfortunately, there was a substantial amount of missing data (N = 90) for the religious belief scale – perhaps because it was the only scale administered using a matrix response format (this was done because our intention was to administer the scales exactly as they have been administered in past research).
Science beliefs.
We attempted to expand our science belief questionnaire by adding additional items for which political liberals might be expected to have more anti-scientific stances. Specifically, we asked about the following (in addition to the items from Study 1; see Table 9): the heritability of human intelligence, the role of genetics in success, “detox” therapies, and nuclear power. However, as is evident from Table 9, the only anti-scientific stance that was more common among political liberals was opposition to nuclear power. Nonetheless, unlike Study 1, many of the issues (6 out of 13) did not significantly correlate with political ideology. At any rate, the full scale had acceptable reliability (α = .72).
Political party.
In addition to the political ideology questions administered in Study 1, we also asked participants to indicate which political party they most strongly affiliate with: Democrat, Republican, Independent, or Other. The sample was fairly politically balanced: 37% Democrat, 31% Republican, 29% Independent, and 3% “other”. We also asked who participants voted for in the 2016 Presidential Election, about their favorability toward Donald Trump, and how likely they would be to vote for Trump in the 2020 Presidential Election. These measures, along with social and economic political ideology, were included in the demographics section of the survey.
Demographics.
Participants were given a demographic questionnaire that included the following items: age, gender, English proficiency, education, income, and ethnicity.
3.1.3 Procedure
Participants either completed the AOT-E at the beginning of the survey or at the end (but before CRT and demographics). The presentation order did not change the pattern of results and the aggregate results will therefore be reported. Otherwise, the order of the following measures was randomized for each participant (unlike Study 1, which used a fixed order): 1) conspiracist ideation, 2) paranormal belief, 3) moral values, 4) science beliefs, 5) political opinions and free market ideology, and 6) religious belief and God skepticism. This block of questionnaires was followed by the CRT and, finally, demographics.
3.2 Results and Discussion
As is evident from Table 10 – and again supportive of the “reasoning helps” perspective – both versions of the AOT-E scale were significantly associated with every other primary measure. However, consistent with Stanovich and Toplak (2019), the correlation between the original AOT-E and religious belief (r = .42) was more than double the size of the correlation for the revised AOT-E (r = .20). The revised scale also had weaker correlations with traditional moral values (r’s = −.37 and −.17 for the original and revised versions, respectively) and conservative opinions (r’s = −.55 and −.36, respectively). Nonetheless, as mentioned, the revised AOT-E was a significant predictor in every case – and, based on the norms from Gignac and Szodorai (2016), most of the correlations were medium (r = .20) to large (r = .30). Moreover, both AOT-E scales were generally more strongly correlated with the measures of interest than was CRT performance. Indeed, every measure was significantly correlated with the revised AOT-E after controlling for CRT performance (all partial r’s > .16, all p’s < .015). Thus, it appears that one’s mere stance toward revising beliefs according to evidence may play a role in what one believes (as an adult) – a conclusion that is plainly supportive of the idea that reasoning is largely effective (for some).
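Partial correlations controlling for CRT can be obtained by correlating the residuals of each variable after regressing it on CRT. A minimal sketch with simulated (hypothetical) data standing in for the study variables:

```python
import numpy as np

def partial_corr(x, y, z):
    """Partial correlation between x and y controlling for z, computed by
    correlating the residuals of x regressed on z and y regressed on z."""
    x, y, z = (np.asarray(v, dtype=float) for v in (x, y, z))
    Z = np.column_stack([np.ones_like(z), z])          # intercept + covariate
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residualize x on z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residualize y on z
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical vectors standing in for revised AOT-E, a belief measure, and CRT accuracy.
rng = np.random.default_rng(1)
crt = rng.normal(size=200)
aote = 0.3 * crt + rng.normal(size=200)
belief = 0.4 * aote + 0.1 * crt + rng.normal(size=200)
print(round(partial_corr(aote, belief, crt), 2))
```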
The pattern of correlations for the individual political opinion items was similar to Study 1 (albeit with slightly weaker effect sizes; see Table 11). Both versions of the AOT-E scale were significantly associated with liberal political stances on almost every issue, with two exceptions. The first exception, as in Study 1, was that AOT-E did not correlate with believing that microaggressions are problematic or unproblematic (in Table 11 this is coded such that a higher score indicates believing that microaggressions are unproblematic). The second exception involved campus free speech, which was also the only notable difference between the two versions of the AOT-E (apart from the fact that the correlations tended to be stronger for the original than the revised version): a more strongly pro-free-speech stance was nominally negatively correlated with the original AOT-E (r = −.11, p = .070) but significantly positively correlated with the revised AOT-E (r = .19, p = .002); this correlation was also positive using the original AOT-E in Study 1 (r = .25, p < .001; see Table 5). The latter correlation is notable because, in the Study 2 Lucid sample, conservatives more strongly disagreed that “students should be able to block controversial speakers from giving talks at their university” – a stance that was also associated with higher AOT-E (see also De keersmaecker, Bostyn, Hiel & Roets, 2020, for related results); this correlation was in the same direction in the Study 1 MTurk sample (−.08, Table 2) but was not significant. In other words, the campus free speech item is the only case where higher AOT-E was associated with a stance (favoring free speech) that is positively (although modestly) correlated with conservative political ideology (Table 8). All other issues were in the opposite direction (or non-significant, as was the case for the microaggressions item).
Finally, as with the overall measures, CRT was a weaker (and often non-significant) predictor for every item relative to either AOT-E scale. Combined with Study 1, these results indicate that a major consequence of AOT-E is for political ideology – precisely the domain where motivated reasoning is purported to dominate (but for a more direct test, see Study 3).
The results for the science beliefs questionnaire largely replicated Study 1 (Table 12). That is, every science belief item that was included in both studies – including general trust in scientists – was positively correlated with both versions of the AOT-E (with the exception of the modern medicine item, which was only marginally correlated with the original AOT-E in Study 2, r = .12, p = .060). The results for the new items added in Study 2 were more tepid. Although disbelief in the “detoxing the body of chemicals” item was correlated with AOT-E, this was not true for any of the other new items. If anything, having a positive stance on nuclear power (the only item positively correlated, however modestly, with political conservatism; see Table 9) was nominally (but not significantly) negatively associated with the revised AOT-E (r = −.11, p = .087). Nonetheless, 10 out of 13 items (along with general trust in scientists) were correlated with the revised AOT-E in the expected direction (see also McPhetres & Pennycook, 2020). Thus, the results again support the contention that reasoning (on balance) facilitates pro-science judgment.
4 Study 3
The results of Study 2 largely reinforced what we found in Study 1: believing that beliefs (or opinions) should change according to evidence was associated with skepticism about conspiratorial, paranormal, and religious claims. Consistent with Stanovich and Toplak (2019), asking about opinions (revised AOT-E) instead of beliefs (original AOT-E) decreased (but did not eliminate) the correlation with religious belief – the revised AOT-E nonetheless continued to significantly predict religious belief. Moreover, the revised AOT-E was just as successful as the original AOT-E at predicting conspiratorial and paranormal beliefs. Furthermore, as in Study 1, AOT-E was positively associated with care/fairness moral values and negatively associated with traditional moral values. Both versions of the AOT-E were also negatively correlated with political conservatism, including political ideology, free market ideology, and a wide range of conservative political opinions. The only exception was that the revised AOT-E was positively associated with support for campus free speech. Although this item was only modestly associated with political conservatism (r = .09), it is noteworthy that this was the sole issue out of the ten surveyed where the more politically conservative stance was associated with the view that beliefs should change according to evidence (see also De keersmaecker et al., 2020). Both versions of the AOT-E were also predictive of a number of pro-science beliefs (with a few exceptions) (McPhetres & Pennycook, 2020). Overall, these results indicate that the AOT-E scale maintains strong predictive validity even when “opinions” are referenced instead of “beliefs”.
Although Study 2 paints a fairly clear picture in the aggregate, it remains unclear whether AOT-E is predictive of (in particular) liberal opinions and pro-science beliefs across the political spectrum. Indeed, previous research has shown that cognitive sophistication interacts with political ideology when predicting people’s stance on issues such as global warming (Kahan et al., 2012; Kahan, Peters, Dawson & Slovic, 2017; Sarathchandra, Navin, Largent & McCright, 2018). Unfortunately, because we assigned participants to two different AOT-E scales, we did not have enough power in Study 2 to effectively estimate effect sizes when separating Democrats and Republicans. Given that the original “belief” version of the AOT-E may modestly inflate some estimates of the correlation between AOT-E and a variety of issues (particularly those that have some association with religious belief), we ran a third study employing only the revised “opinion” version of the AOT-E.
4.1 Method
4.1.1 Participants
American participants were recruited from Lucid for Academics on May 9th, 2019. We recruited 1000 participants. In total, 1063 participants began the study but 103 did not finish. We also removed 182 individuals who responded affirmatively when asked if they responded randomly at any point during the survey. The resulting sample (N = 778, Mean age = 43.8) consisted of 363 males and 410 females, 2 transgender males, 2 trans/non-binary, and 1 “not listed”.
4.1.2 Materials and Procedure
The materials and procedure were identical to Study 2, with the following exceptions: 1) participants were administered only the revised (“opinion”) AOT-E; 2) the religious belief questionnaire (for which there was substantial missing data in Study 2) was changed from a matrix presentation format to the single-question format used for the other measures; 3) we also changed the response options for the religious belief questionnaire to be consistent with the paranormal/political/science questionnaires (i.e., a 7-point scale); 4) we added 3 CRT items from Thomson and Oppenheimer (2016) that are relatively easier, based on past research (see https://osf.io/xqzse/ for full materials); and 5) we added a single continuous measure of Democrat–Republican preference (“Which of the following best describes your political preference?” Strongly Democratic, Democratic, Lean Democratic, Lean Republican, Republican, Strongly Republican), in addition to the party classification item used in Study 2 (which included “Independent” as an option).
Scale reliabilities for Democrats, Republicans, and Independents are in Table 13. Reliability was low for the free market ideology scale and (among Republicans and Independents in particular) the conservative opinion and pro-science belief scales. Notably, variability was fairly similar across the major variables for Democrats and Republicans, indicating that restriction of range is not a likely explanation for any divergences that we observe.
4.2 Results and Discussion
Our focus for Study 3 was on the extent to which AOT-E predicted the same constellation of beliefs, values, and opinions for individuals across the political spectrum. As is evident from Table 14, AOT-E was a strong predictor across the board for Democrats (paralleling the overall results from Studies 1 and 2). However, the same was not equally true for Republicans, for whom AOT-E was a significant predictor of skepticism about paranormal claims, acceptance of care/fairness moral values, and (notably) pro-science beliefs, but not of the other issues. Interestingly, in contrast with the overall results, AOT-E was positively associated with economic conservatism among Republicans. Nonetheless, it is noteworthy that Republicans scored themselves lower on AOT-E (M = 61.2, SD = 15.0) than both Democrats (M = 65.1, SD = 15.9) and Independents (M = 64.9, SD = 16.3), t’s > 2.5, p’s < .015. Moreover, the correlations for the full sample (averaging across liberals and conservatives) paralleled the previous two studies: AOT-E was a significant predictor for every measure except economic conservatism. Furthermore, as in Study 2, the CRT results tended to parallel AOT-E, despite the CRT being a weaker predictor overall.
Turning now to the specific political issues that formed our conservative opinions scale (Table 15), it is evident that the previously identified correlation between AOT-E and liberal political opinions (with one notable exception) was driven largely by Democrats and, in some cases, by Independents. For example, higher AOT-E was associated with support for same-sex marriage among Democrats (r = −.31) and Independents (r = −.27), but this correlation was only marginally significant among Republicans (r = −.12, p = .075). Most importantly, however, there was only a single issue for which AOT-E predicted opposite opinions for Democrats and Republicans: capital punishment. Whereas higher AOT-E was associated with more opposition to capital punishment among Democrats, it was associated with more support for capital punishment among Republicans. Thus, even though AOT-E was not as strongly predictive among Republicans as among Democrats, it is clearly not merely driving political polarization either (otherwise more issues would be significantly correlated in opposite directions). Indeed, the opinion that there is room for men in feminism was associated with higher AOT-E for both Democrats and Republicans. Finally, the previously noted exceptional case in which the more conservative opinion was, overall, associated with higher AOT-E – support for free speech – was driven by Republicans and Independents (i.e., AOT-E was not associated with support for free speech among Democrats). Thus, whether AOT-E predicts support for or opposition to a specific issue appears to depend to some extent on the issue and on the group in question. Nonetheless, AOT-E certainly maintained a great deal of predictive validity (contrary to the “reasoning is helpless” perspective and consistent with the “reasoning is helpful” perspective) and was not associated with political polarization writ large (undermining the “reasoning hurts” perspective).
The results for science-related beliefs (Table 16) parallel the pattern for conservative opinions insofar as they provide evidence against the motivated reasoning (“reasoning hurts”) perspective. Specifically, AOT-E was generally associated with pro-science beliefs across the board for Democrats (with the exceptions of skepticism about detoxing and the role of genetics in success, which were not significant, and support for nuclear power, which was negatively associated with AOT-E) and for Independents (with the exceptions of genetics and supporting GMOs). Among Republicans, the most politically polarizing issues, such as global warming, the big bang, and evolution (see Table 17), were not associated with AOT-E. However, AOT-E was associated with pro-science stances on several intermediate issues, namely support for stem cells, vaccines, technology, and modern medicine. Furthermore, trust in scientists was positively associated with AOT-E across the political spectrum.
5 General Discussion
Although the belief that beliefs (and opinions) ought to change according to evidence is held by most people, there is meaningful variability in the strength of this conviction. The results of all three studies point to one broad, yet important, conclusion: actively open-minded thinking about evidence (AOT-E) is, in the aggregate, a strong predictor of a wide range of beliefs, opinions, and values. This implies that reflecting about evidence is something that people meaningfully engage in in their everyday lives, which in turn indicates that the exercise of human reasoning is, on balance, “helpful”. The respective ideas that reasoning is “helpless” or “hurtful” did not find support in our data.
To summarize, AOT-E was associated with skepticism about conspiratorial, paranormal, and religious claims and with agreement with a variety of scientific claims. Thus, AOT-E appears to support the rejection of epistemically suspect beliefs, indicating that what people believe about whether beliefs ought to change (meta-beliefs) influences what they take to be true or false about the world. AOT-E was also consistently associated with political liberalism in a variety of forms (despite this being a domain in which motivated reasoning should be prominent): from having a more liberal political ideology, to adopting more liberal moral values (specifically, rejection of traditional values and agreement with care/fairness values), to opposition to economic conservatism and free market ideology. Furthermore, AOT-E was positively associated with a variety of specific liberal political opinions (e.g., supporting gay marriage and access to abortion) in the aggregate. This suggests that political conservatives, who tend to be more resistant to societal change (White, Kinney, Danek, Smith & Harben, 2019), may also be more resistant to intrapersonal belief change (but see below for a more nuanced interpretation).
The strong predictive validity of the AOT-E across a wide range of domains suggests that people’s meta-beliefs about whether and how beliefs should change play an important role in belief formation. However, this conclusion comes with an important caveat that is in many ways as interesting as the conclusion itself. Most notably, Study 3 revealed that AOT-E is much more predictive among Democrats than among Republicans (with Independents intermediate). This interaction is, in some cases, consistent with previous research that has been used to support the “reasoning hurts” perspective – however, as we will argue, it is not consistent with how some of those past results have been interpreted.
5.1 AOT-E among Democrats and Republicans
The AOT-E did not have the same predictive validity for conservatives as it did for liberals. To take a prominent example from Study 3, AOT-E was very strongly correlated with belief in anthropogenic climate change among Democrats (r = .43, p < .001), but there was no such (significant) correlation among Republicans (r = .09, p = .179). This parallels previous findings in which individuals who are more cognitively sophisticated (on a variety of measures, including the CRT) are more politically polarized about climate change (Kahan et al., 2012). In particular, Kahan et al. found that science literacy and numeracy was positively correlated with climate change risk attitudes among liberals (r = .08, p = .03) but negatively correlated among conservatives (r = −.12, p = .03). The favored explanation for these results is that cognitive sophistication polarizes climate change (and other) attitudes because it facilitates motivated (“identity-protective”) reasoning (Kahan et al., 2012; Kahan, Peters, Dawson & Slovic, 2017; Sarathchandra, Navin, Largent & McCright, 2018) – an account that has notably been applied to the AOT as a measure of general cognitive sophistication as well (Kahan & Corbin, 2016; but see Baron, 2017). This “humans-as-lawyers” motivated reasoning perspective has had a large influence on the field and in the popular press (for an overview, see Pennycook, 2018).
Motivated reasoning cannot, however, account for our broad pattern of results (with some potential exceptions). Although an interaction between political ideology and cognitive sophistication in the prediction of an attitude (such as climate change belief) is often taken as positive evidence for the motivated reasoning account, the interaction itself is ambiguous: It emerges simply because the sample happens to contain both liberals and conservatives, and it can be produced either by opposing effects that are not individually significant or by an effect that is merely weaker in one group than in the other. This matters because the central prediction of the motivated reasoning (“reasoning hurts”) account is actually two separate (and opposing) effects among political liberals and conservatives: Cognitive sophistication should be positively associated with politically congruent attitudes (e.g., climate change belief for liberals) and negatively associated with politically incongruent attitudes (e.g., climate change belief for conservatives; see Pennycook & Rand, 2019b). The results of Study 3 are plainly inconsistent with this prediction: Not only did we not find opposing effects in the context of climate change (in fact, AOT-E was nominally positively correlated with climate change beliefs among Republicans), we did not find them for any other polarized issue either (with one exception). Specifically, there was not a single scientific issue in our study that produced significant correlations with AOT-E in opposite directions for Democrats and Republicans. Furthermore, across ten explicitly political issues (such as support for police authority or opposition to abortion), there was only one case where the motivated reasoning prediction of significant opposing effects held: Capital punishment. Even broad ideological positions such as social conservatism and free-market ideology did not produce significant opposing effects (although a second exception is present here: Economic conservatism). Thus, in almost every case, the prediction of the motivated reasoning (or identity-protective cognition) account was not supported. Rather, it appears that AOT-E is simply less predictive among political conservatives than among liberals.
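To make the statistical point concrete, the following minimal simulation sketch (group labels, sample sizes, and slopes are hypothetical, and statsmodels is assumed to be available) shows that a significant group-by-predictor interaction can arise when the predictor has a positive effect in one group and essentially no effect in the other, with no opposing (negative) effect anywhere.

```python
# Illustrative simulation only (hypothetical group labels, sample sizes,
# and slopes; not the analysis reported in the paper). It shows that a
# significant group x predictor interaction can arise without opposing
# effects: the predictor is positive in one group and ~null in the other.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000  # per group (arbitrary)

aot = rng.normal(size=2 * n)                      # standardized predictor scores
group = np.repeat([0.0, 1.0], n)                  # 0 = "Group A", 1 = "Group B"
slope = np.where(group == 0, 0.45, 0.05)          # strong vs. near-zero effect
attitude = slope * aot + rng.normal(size=2 * n)   # no negative slope anywhere

# Regression with an interaction term: attitude ~ aot + group + aot:group
X = sm.add_constant(np.column_stack([aot, group, aot * group]))
fit = sm.OLS(attitude, X).fit()
print(f"interaction beta = {fit.params[3]:.2f}, p = {fit.pvalues[3]:.4f}")

# Simple (within-group) correlations: both are zero or positive.
for g in (0.0, 1.0):
    r = np.corrcoef(aot[group == g], attitude[group == g])[0, 1]
    print(f"group {int(g)}: r = {r:.2f}")
```

In other words, observing the interaction alone cannot distinguish mere attenuation in one group from the opposing-effects pattern that the motivated reasoning account predicts.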
One potential explanation is that, despite arguments to the contrary (Kahan & Corbin, 2016), actively open-minded thinking about evidence is not merely a proxy for cognitive sophistication (Baron et al., 2015), an observation supported by the divergences between AOT-E and CRT in our own data (with the former being a stronger predictor overall than the latter) as well as by the data of Kahan and Corbin (see Baron, 2017). Indeed, as intimated in the introduction, AOT-E is definitionally opposed to motivated reasoning: Believing that beliefs ought to change according to evidence essentially amounts to a rejection of motivated reasoning. Of course, it is possible that those who report being more actively open-minded are simply being deceptive (and potentially self-deceptive); that is, individuals who say they are particularly willing to change their beliefs according to evidence may in reality be the most likely to do the opposite and engage in motivated reasoning. This seems unlikely and, at any rate, the results for the CRT, which is plainly a measure of some sort of cognitive sophistication, also do not support the motivated reasoning account. There was not a single specific issue, political or science-based, that produced opposing and significant correlations with the CRT for Democrats relative to Republicans. Thus, the most parsimonious takeaway from the present data is simply that the motivated reasoning account (a “reasoning hurts” perspective) is wrong or incomplete.
If not motivated reasoning, what then explains the finding that AOT-E is more consistently predictive for liberals than for conservatives? It is potentially revealing that the reduced predictive validity for Republicans relative to Democrats was evident even for measures that did not significantly correlate with conservatism. For example, conspiracy ideation was strongly correlated with AOT-E for Democrats (r = −.32, p < .001) but less so for Republicans (although the correlation was marginally significant, r = −.12, p = .070; see Footnote 8). This occurred even though conspiracy belief was equivalent between Democrats and Republicans (t = 1.16, p = .245). One mundane possibility is that data quality was, for whatever reason, poorer among Republicans than among Democrats. Contrary to this, scale reliabilities were largely similar for both groups (Table 13), and rates of random responding were very similar for Republicans and Democrats.
One possibility is that there are important differences between the “coalitions” that make up the Democratic and Republican parties. For example, Baron (2017) noted that the Democratic Party in the United States (and liberals in general) is made up of a more diverse group of people than is the Republican Party. Supporting this idea, variation in most of the primary measures in Studies 2 and 3 (including the AOT-E itself) was at least nominally higher among Democrats than among Republicans (see Table S4 in the supplementary materials).
Yet another possibility is that there is no genuine difference between conservatives and liberals in their beliefs about how beliefs should change, but that the AOT-E items are viewed through a political lens, much as the “belief” items in the original AOT-E were biased against religious individuals (Stanovich & Toplak, 2019). It may be that “evidence” as a term or concept has been politicized to some extent and that the AOT-E is therefore read as expressing a liberal outlook (Krugman, 2019). Under this account, the weaker correlations among Republicans occur because some proportion of conservatives report lower AOT-E simply because they are resisting the framing or wording of the questions (or perhaps the source of the questions; for more on insincere responding in the context of partisan bias in surveys, see Bullock & Lenz, 2019). One counter to this possibility, however, is that the CRT is also less predictive among Republicans, even though Republicans did no worse on the test than Democrats and presumably are not ideologically opposed to simple-seeming word problems. Although this does not rule out the politicization-of-evidence possibility, it does render it less likely.
Alternatively, liberals and conservatives (in the USA) may genuinely differ not only in what they believe (including meta-beliefs) but also in why they believe it. The AOT-E is equipped to assess one’s stance toward evidence, which is apparently important among liberals (insofar as AOT-E distinguishes between the types of beliefs and opinions liberals tend to hold, although other factors are surely important as well). At least based on the present correlational data, belief formation appears to be driven more by other factors for conservatives. That is, it is not simply that conservatives are less willing to change their beliefs according to evidence (although the overall difference is nonetheless evident), but rather that factors not studied here contribute more substantially to belief formation among conservatives. One of the apparent defining features of conservatism, apart from resistance to change, is the endorsement of hierarchies (e.g., Jost, Glaser, Kruglanski & Sulloway, 2003). Perhaps part of the reason AOT-E is less predictive among conservatives, then, is that belief is less intrapersonal and more interpersonal among political conservatives: Belief is more about social groups, and variation in beliefs among conservatives is therefore driven more by exposure to different hierarchies and information sources (for an example, see Landrum, Lull, Akin, Hasell & Jamieson, 2017). Plainly, further research is required.
5.2 Very large effect sizes: A lesson
In a recent discussion of effect size estimates, Funder and Ozer (2019) argued that r’s of .10, .20, and .30 correspond to small, medium, and large effect sizes, respectively (see also Gignac & Szodorai, 2016). They also argued that very large effect sizes (r = .40 or greater) are, in the context of psychological research, “likely to be a gross overestimate rarely found in a large sample or in a replication” (p. 1). In Study 1, AOT-E predicted multiple beliefs and opinions at a level greater than .40 (including aggregate liberal opinions and pro-science beliefs at r’s of approximately .60). As a seeming counter-example to Funder and Ozer’s claim, Study 1 consisted of a large sample (N = 375) and was largely replicated with a different sample (using the original AOT-E) in Study 2 (r’s of .55 and .40 for liberal opinions and pro-science beliefs, respectively). Nonetheless, consistent with Funder and Ozer’s larger point, the very large effect sizes in Study 1 may be inflated for two reasons.
First, as argued by Stanovich and Toplak (2019), who also noted the large effect sizes as a reason for skepticism, the original version of the AOT-E appears to have inflated some effect sizes because individuals may have presumed the questions to be about religious belief in particular rather than beliefs more generally. Although religious believers continue to rate themselves as more resistant than non-believers to revising opinions according to evidence, the “belief” wording in the original AOT-E (which was derived from earlier scales) may have inflated the correlation with religious belief and its covariates. However, an alternative possibility is that the belief items are simply more predictive overall (e.g., because they are more easily understood). A more systematic investigation of “belief” versus “opinion” wording is necessary before firm conclusions can be drawn. In any case, the present data indicate that the AOT-E is relatively strongly predictive regardless of these small changes in wording.
Second, as discussed, we found that AOT-E was much more predictive across the board for political liberals (Democrats) than for conservatives (Republicans). This is relevant to the apparently overestimated correlations in Study 1 because that sample came from Mechanical Turk, which was heavily skewed toward liberals (see Footnote 9). Thus, although Study 1 consisted of a large sample and produced results that were replicated in Study 2, our evidence indicates that Funder and Ozer’s (2019) conclusion that very large effect sizes are likely overestimates is nonetheless accurate. In this case, however, the issue was more a matter of generalizability than of replicability. The underlying lesson is the same: Very large effect sizes should be interpreted with caution.
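As a rough illustration of this generalizability point, the sketch below (with invented subgroup sizes and correlations, not values taken from our studies) shows how a pooled correlation is pulled toward the effect size of whichever subgroup dominates the sample.

```python
# Illustrative sketch with invented numbers: when one subgroup dominates
# a sample, the pooled correlation is pulled toward that subgroup's effect
# size, so a heavily skewed sample can yield a larger overall r than a
# balanced sample would.
import numpy as np

rng = np.random.default_rng(2)

def simulate_group(n, r):
    """Draw n (x, y) pairs with population correlation r."""
    cov = [[1.0, r], [r, 1.0]]
    return rng.multivariate_normal([0.0, 0.0], cov, size=n)

def pooled_r(n_strong, n_weak, r_strong=0.55, r_weak=0.10):
    data = np.vstack([simulate_group(n_strong, r_strong),
                      simulate_group(n_weak, r_weak)])
    return np.corrcoef(data[:, 0], data[:, 1])[0, 1]

print("skewed (80/20):  ", round(pooled_r(8000, 2000), 2))
print("balanced (50/50):", round(pooled_r(5000, 5000), 2))
```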
5.3 Limitations
The principal limitation of the present study is that it is correlational; it is therefore not possible to establish, for example, whether AOT-E affects political opinions, whether the reverse is true, or whether some third factor affects both. Nonetheless, experimentally manipulating AOT-E and testing for a change in beliefs does not seem a prudent approach. Beliefs, opinions, and values are formed over years, and minute-long manipulations do not offer a reasonable proxy for the psychological processes that are of chief interest here. Rather, longitudinal studies that assess differences in AOT-E in adolescence and test for changes in beliefs over time would be a more fruitful future direction.
The generalizability of this study is limited in a number of ways. First, our samples are not precisely representative of the United States population (although Lucid comes much closer than Mechanical Turk). Second, we obviously cannot generalize beyond the USA. Third, although we attempted to test as many different beliefs, values, and opinions as possible in a single survey, we have surely missed many important issues. Furthermore, it is possible that our selection of issues was itself subject to our own liberal political bias.
6 Conclusion
Our 8-item actively open-minded thinking about evidence (AOT-E) scale was strongly predictive of a wide range of beliefs, values, and opinions. People who reported believing that beliefs and opinions should change according to evidence were less likely to be religious, less likely to hold paranormal and conspiratorial beliefs, more likely to believe a variety of scientific claims, and more politically liberal (in terms of overall ideology, partisan affiliation, moral values, and a variety of specific political opinions). Moreover, the effect sizes for these correlations were often large or very large by established norms (Funder & Ozer, 2019; Gignac & Szodorai, 2016). The size and diversity of AOT-E correlates strongly support one major, if broad, conclusion: Socio-cognitive theories of belief (both specific and general) should take into account what people believe about when and how beliefs and opinions should change (i.e., meta-beliefs). That is, we should not assume that evidence is equally important for everyone. Furthermore, our findings provide clear support for the perspective that reasoning facilitates sound judgment, thereby undermining the idea that intuition commonly dominates reasoning, and we found essentially no support for motivated reasoning. Regardless, future work is required to more clearly delineate why AOT-E is more predictive for political liberals than for conservatives.