1. Introduction
Some people believe that vaccines can cause autism, even though “vaccinations are not associated with the development of autism or autism spectrum disorder” (Taylor et al. 2014: 1). Here, we have two competing claims. The first is that vaccines do not cause autism. This claim represents what Coady (2003) calls the official story. In short, the official story is the prevailing theory of the day. The second claim is that vaccines do cause autism. This is a contrarian theory – i.e., a theory that conflicts with the official story (ibid.). Contrarian beliefs are widespread. In this paper, my goal is to explore one explanation for why people come to endorse contrarian theories: there are epistemic manipulators (henceforth, manipulators) who trick epistemic agents into holding contrarian theories for personal gain. I will explain one mechanism that manipulators use to trick epistemic agents – and some consequences of that mechanism. I call this mechanism the illusion of epistemic trustworthiness. I explain this illusion by looking at an influential model of trust. The model identifies several important factors that people look for when building trust relations. I go on to explain how manipulators can fabricate the appearance of each of these factors, and thus build the illusion that they are trustworthy sources of information. Manipulators can use this illusion to get epistemic agents to believe contrarian theories for personal gain. Additionally, I will argue that once an epistemic agent is tricked into viewing a manipulator as epistemically trustworthy, standard practices such as fact-checking will become at best ineffective and at worst harmful. I will suggest that instead of fact-checking we should engage in the practice of trust undercutting.
2. Three possible explanations for contrarian belief
We all know someone who has come to believe a contrarian theory rather than the official story. But why do people believe contrarian theories? There are multiple possible explanations. The primary explanation given in the literature is that people who endorse contrarian theories possess epistemic vices. Epistemic vices can be understood as problematic character traits “such as close-mindedness, gullibility, active ignorance, and cynicism” that make it difficult to fruitfully engage in epistemic work (Nguyen 2021: 5). For example, Cassam (2016) gives the case of Oliver – an epistemic agent who believes that 9/11 was an inside job. Cassam says the best explanation for Oliver's belief is that he has epistemic vices which lead him to endorse this contrarian view (for more on epistemic vice, see Kidd et al. 2020; Swank 2000).
I agree that sometimes epistemic vice explains why an epistemic agent comes to endorse a contrarian theory. However, there are other cases where it does not. Perhaps counterintuitively, it has been found that some contrarians have better epistemic practices and more true beliefs (related to the subject of their contrarian theory) than many people who believe the official story. For example, Lee et al. (2021) found that contrarian theorists hold workshops about how to gather and evaluate raw figures; Klein et al. (2019) found that contrarian theorists are highly interested in gathering and analyzing evidence; Harris (2018) observed that contrarian theorists seem to search out and evaluate evidence more often than people who accept the official story; and Kahan (2015) found that people who deny that climate change is a significant problem are often more aware of how climate change works than people who view it as a significant problem. In short, while we tend to think of contrarian theorists as crackpots living in bunkers wearing tinfoil hats, many of them are intelligent people with generally good epistemic practices. They gather information (both good and bad) and try their best to think critically about it.
An alternative explanation as to why people believe contrarian theories is that they make epistemic mistakes. After all, doing your own research is epistemically risky (Levy 2022). People may be bright and have access to good information, but fall short of the epistemic excellence required to solve some particular problem. It can take teams of trained experts to answer even one small part of a complex question. For example, the PBS documentary “King Arthur's Lost Kingdom” explores what the Dark Ages looked like in the UK. Answering that question involved multiple teams of archeologists, literary scholars, DNA mapping, and high-energy physics equipment. Plenty of intelligent people with generally good epistemic practices wouldn't be able to gather and process all of that information without making any wrong turns – especially without proper expertise. These wrong turns can lead to endorsing contrarian theories. Thus, you can end up endorsing a contrarian theory simply because you made an epistemic mistake.
A third, somewhat darker explanation is that there are manipulators who guide epistemic agents into contrarianism for personal gain. In Merchants of Doubt, Oreskes and Conway (2010) explored one way that manipulators can get people to reject the official story: manipulators can manufacture evidence that functions to undercut the official story. In this paper, I examine an additional method of manipulation which I call the illusion of epistemic trustworthiness. Rather than undercutting existing evidence, this strategy is used by manipulators to become trusted sources of information. The idea that people can use trust as a tool for manipulation is not new. In The Prince, Machiavelli suggests that a good prince should display an appropriate amount of trust both to prevent the prince from becoming imprudent and also to not allow “excessive distrust to render him insufferable” (The Prince: 271). Thus, a prince can get others to like him, at least in part, by displaying the appropriate amount of trust toward others. More recently, empirical work has shown that trust can be abused for manipulative purposes (see Forster et al. 2016; Williams and Muir 2019). Additionally, Nguyen has argued that trust is an important factor in the creation and maintenance of echo chambers (2020); and elsewhere, that a sense of clarity acts as an epistemic litmus test for figuring out when we ought to terminate inquiry – a fact that manipulators can exploit to build trust (2021, 2023). I think each of these manipulation tactics exists. They can be performed together or independently. But, importantly, if someone is manipulated by the illusion of epistemic trustworthiness it will impact viable strategies for convincing them to believe the official story.
I take epistemic vice, epistemic mistakes, and epistemic manipulation to each explain the existence of some set of contrarian beliefs. Additionally, I don't take these explanations to be mutually exclusive. Someone is more likely to make epistemic mistakes during research if they have epistemic vices. And a manipulator will likely have an easier time manipulating people who have epistemic vices. Thus, these explanations can and will interplay in the real world. However, it is also true that you can make epistemic mistakes during research without epistemic vice playing a large role. Similarly, it is possible to be duped by manipulators without being particularly prone to epistemic vice. Thus, I find it important that the manipulation tactics I describe here don't require the presence of epistemic vice. For that reason, I will focus on how manipulators dupe what I will call competent epistemic agents. These agents need not be perfect reasoners (who among us is?). They need not even be particularly good reasoners. Rather, they merely need to lack epistemic flaws so dramatic that we would call them epistemic vices. In that spirit, the remainder of this paper can be seen as explaining how manipulators can trick competent epistemic agents into endorsing contrarian theories by manufacturing the illusion of epistemic trustworthiness.
3. Epistemic litmus tests and trust
Epistemic agents are persons to whom (1) we can “ascribe knowledge and other epistemic states (such as justified or rational belief)” and (2) who play some role “in acquiring, processing, storing, transmitting, and assessing knowledge” (Goldberg 2021: 19). All actual epistemic agents are limited – both practically and cognitively. We are often not able to dedicate all of our time and attention fully to our epistemic goals. And even if we have that luxury in some cases, no one person could know everything there is to know. There is just too much information out there. In short, we face what Millgram calls the problem of hyper-specialization (2015: 2, 27–44). That is, there is too much difficult epistemic work to do, so we must trust other epistemic agents to do some of that work for us (ibid.). Because of this, we often need to outsource some of our epistemic labor (see Levy 2022). We rely on others to gather and assess evidence, store and distribute that evidence, and so on. Thus, a proper assessment of many of our beliefs will include an epistemic assessment of the people we choose to become epistemically dependent upon (Goldberg 2021: 20).
So, we cannot dedicate all of our time and resources to solving our epistemic problems. We need the help of others. However, we also cannot spend all of our time and attention figuring out which other people can help us out epistemically. Thus, we rely on heuristics to figure out who we can offload some of our epistemic labor to. In other words, we use an epistemic litmus test. Typically, epistemic litmus tests are quick, reasonably reliable tests for determining whether a belief is true. But these litmus tests can serve other epistemic functions as well. For example, according to empirical research, we are more likely to accept an idea as true if it is easy for us to understand (Kahneman 2011: chapter 5; Oppenheimer 2008). Drawing on this phenomenon, Nguyen has argued that a sense of clarity acts as an epistemic litmus test for figuring out when we ought to terminate inquiry (2021: 13). The question I am investigating here is related but importantly different. Nguyen explores what acts as a litmus test for terminating inquiry (and how that litmus test can be exploited) (ibid.; 2023). I am discussing an epistemic litmus test for setting up epistemic dependency relations – and describing how this litmus test can be tricked by manipulators. Thus, here we need a test for whether we can responsibly outsource epistemic labor to someone else.
To figure out what a litmus test for epistemic dependency relations would look like, it will be helpful to look at how people set up dependency relations more generally. When people go about their lives they often need to rely on other people to do things for them. We rely on farmers to provide grocery stores with food; we rely on grocery employees to make that food accessible to us; and so on. These are dependency relations. Sometimes how society is set up dictates who we offload labor to (e.g., we don't typically seek out people we can rely on to stock our grocery shelves, we let companies do that for us). Other times, we need to figure out who we can responsibly offload labor to for ourselves (e.g., if I were a manager at a store, I would have the responsibility of finding people who could be relied on to stock shelves). To do this, we look for cues of trustworthiness.
Trust is usually described as a fundamentally three-place relation (Baier 1986; Hawley 2014; Hieronymi 2008; Holton 1994; Jones 1996). There is a trustor (the agent who is doing the trusting), a trustee (the agent who is being trusted), and something the trustor is entrusting to the trustee. Trustworthiness is a normative status of trustees in relation to trustors. Here, I am not interested in the normative question of what makes someone trustworthy. Instead, I am interested in the descriptive question of how people assess trustworthiness (and later, how that mechanism can be tricked). In an influential analysis of the empirical literature on trust, Mayer et al. (1995: 717) found that a few key factors influence whether a trustor will decide to trust a trustee with something. First of all, as Deutsch (1958) made clear, risk is a central feature of trust. People only need to trust each other when there is some level of risk involved. To see why, imagine the following: you are trying to invest for retirement. Now, think of a world in which there is one clear best investment plan, and all investors always recommend that plan. In this world, there is no reason for you to figure out which investors are trustworthy. That is because there is no risk of getting bad investment advice. No matter who you go to, you will get the best investment advice possible. In the real world, however, we need to figure out who is trustworthy because there are often risks associated with either trusting the wrong people or failing to trust at all. Thus, trust is only needed when risk is present.
Second, potential trustors have different inherent propensities to trust (Mayer et al. 1995: 715). “Propensity will influence how much trust [a trustor] has for a particular trustee before data on that particular party [is] available” (ibid.: 715). For example, consider the characters Ted Lasso (from the TV show Ted Lasso) and Lord Voldemort (from the Harry Potter series):
High Propensity for Trust: Ted Lasso is an American football coach who gets hired to coach a European football (soccer) team. It is strange for someone who barely knows anything about European football to be given a coaching job at the highest level. But Ted doesn't consider whether the job offer was given for devious reasons or not. He simply trusts that he was given the job for good reasons and moves to the UK.
Low Propensity for Trust: Lord Voldemort is a powerful evil villain. He is extremely paranoid that someone will try to take his power away. Because of this, he creates a complicated web of safeguards and never tells anyone the full extent of those safeguards. Additionally, he never fully trusts anyone. He always accompanies requests to complete tasks or keep secrets with threats of injury or death upon failure.
These examples show that – before a trustor has any information about a trustee – certain possible trustors will have a natural tendency to trust others. They are naturally trusting. Other possible trustors will only very reluctantly (if ever) place their trust in anyone. They are naturally suspicious. These are two extremes on a spectrum of propensity to trust.
And finally, there are three important factors that trustors look for when trying to determine whether a trustee is trustworthy. These are ability, benevolence, and integrity (ibid.: 717–24). Ability is understood as “that group of skills, competencies, and characteristics that enable a party to have influence within some specific domain” (ibid.: 717). As understood in the context of figuring out who to trust, trustors will look for evidence that the trustee can perform the task entrusted to them. For example, if you needed someone to watch your child for the evening, then you would want evidence that the babysitter could perform that task. Thus, you would likely look for references, experience, and so on. The second factor, benevolence, is understood as “the extent to which a trustee is believed to want to do good to the trustor” (ibid.: 718). Benevolence is important because it would be extremely risky to place your trust in someone who wished you harm. If you were holding onto a rope and needed to rely on someone to pull you up, it would be wise to trust the task to someone who wanted you to survive rather than someone who wanted you dead. Thus, when figuring out who to trust, trustors look for evidence that the trustee is benevolent toward them. Third, integrity can be understood as “the trustor's perception that the trustee adheres to a set of principles that the trustor finds acceptable” (ibid.: 719). To better understand why integrity is important for developing trust, consider the following situation:
Water Case: You find yourself mildly thirsty. You would like a glass of water but have no way of getting it yourself. There are two people you could trust to get you water, Jenna and Steve. Jenna is willing to walk into town, spend some money, and bring you a bottle of water. Steve, on the other hand, will rob some nearby hapless tourists and bring you their water.
Presumably, you would trust Jenna to bring you the water rather than Steve. But why? Both can bring you water and both are benevolent toward you. You could successfully get water by trusting either party. Simply put, it matters to us that we offload labor to people whose moral sensibilities we are okay with. For one thing, we care that the task we are entrusting to someone else is completed morally. But more generally, it seems safer to trust people who we know will act according to certain moral norms. There is less chance of mishap.
How does this all connect to our epistemic litmus test? In short, when we need to offload epistemic labor the process will be similar to other cases of offloading. The presence of risk will kickstart our search for someone we can trust to complete the epistemic task for us. Some people (those with a high propensity for trust) will trust others before gathering much evidence of trustworthiness. In many cases, we may call this group of people gullible – and thus if these people come to believe contrarian theories, then the belief can be explained by an epistemic vice. Other people – those I am calling competent epistemic agents – will look for evidence of trustworthiness. That is, they will look for evidence that the trustee has epistemic ability, is benevolent toward them, and shares a sense of moral integrity with them. This is where the manipulator will spring their trap.
4. The illusion of epistemic trustworthiness
Much like ordinary epistemic agents, manipulators trade in the acquisition, processing, storage, transmission, and assessment of information. But while epistemic agents try to trade in justified beliefs and knowledge, manipulators don't care whether epistemic goods get promoted – as long as they profit from the results. In this section, I will explore the nature of one method manipulators use to build trust. I call it the illusion of epistemic trustworthiness. It functions through two mechanisms: (1) guiding audience research in polluted epistemic environments to seemingly validate sensational or desirable assertions and (2) signaling the possession of certain character traits (related to intelligence, benevolence, and integrity) and credentials. This manipulation begins when a competent epistemic agent is looking for a trustworthy person to help them solve an epistemic problem.
The first step for the manipulator will be to emphasize that it is risky not to listen to what they have to say. This can be done either implicitly or explicitly, and it can be positively or negatively framed. For example, cults will often promise potential members happiness, wealth, knowledge, eternal salvation, or some other set of goods. The risk of not joining, in such cases, is missing out on the promised goods. Other manipulators – like Alex Jones – claim that there are dangerous forces in the world. Manipulators will promise to keep you aware of the danger, or even give you knowledge that can save you from it. These are not mutually exclusive ways of indicating risk. But whichever way is employed will serve a similar function. That is, by suggesting that not taking some epistemic problem seriously is risky, the manipulator is giving competent epistemic agents a reason to look for an epistemic trustee. Here, not taking the problem seriously includes both disbelieving and suspending judgment about the risk. The manipulator will make their audience think that taking either of these routes is foolhardy.
Next, the manipulator will broadcast the notion that they are a potential epistemic trustee. This will likely involve the manipulator holding up truth-finding or rationality as their ultimate goal. Of course, genuine epistemic agents and communities do this as well. However, manipulators tend to mimic and over-emphasize this behavior. For example, conspiracy theorist Alex Jones’ secondary news site NewsWars at one point had the tagline “Breaking News and Information: a strong bias for telling the truth.” Similarly, cult leader Keith Raniere used a “tool” which he called “Rational Inquiry” to brainwash group members. This emphasis on truth and rationality frames the manipulator's goals as being epistemic. In addition, the manipulator might explain why they – and (often) they alone – have the tools required to answer the epistemic questions at hand. Surely this won't by itself be enough to dupe a competent epistemic agent. But it might coax them into viewing the manipulator as a possible epistemic trustee. This is when a competent epistemic agent would begin to look for evidence of trustworthiness – i.e., evidence of epistemic ability, benevolence, and moral integrity.
Let us begin with evidence of epistemic ability. Given that we are exploring what competent epistemic agents would do, I will assume that the epistemic agents involved will look for the evidence they ought to look for. Goldman and O'Connor (2021) point us toward four possible types of evidence that one ought to look for to assess epistemic ability. These include trying to see whether an agent's claims cohere with the claims of other trusted sources (coherence), directly verifying an agent's claims (verifiability), identifying whether an agent seems generally knowledgeable (intelligence), and identifying relevant credentials (credentials). Each of these counts as a type of evidence that someone has epistemic ability. Epistemic agents will assign different weights to the types of evidence that seem important in different cases. For example, if you are trying to get directions to the nearest coffee shop, then it would be sufficient to identify someone who seems knowledgeable about the surrounding area. If you are trying to determine whether climate change is happening, on the other hand, then you ought to seek out someone with the relevant credentials.
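The context-sensitive weighting just described can be made vivid with a toy model. The following sketch is my own illustration, not part of Goldman and O'Connor's account; every number and weight in it is a hypothetical stand-in for a trustor's informal judgments.

```python
# Toy model (illustrative only): a trustor combines the four evidence
# types into a single perceived-ability score, with weights that vary
# by task. All values here are hypothetical.

EVIDENCE_TYPES = ("coherence", "verifiability", "intelligence", "credentials")

def ability_score(evidence, weights):
    """Weighted average of perceived evidence; each evidence value lies in [0, 1]."""
    total = sum(weights[t] for t in EVIDENCE_TYPES)
    return sum(weights[t] * evidence.get(t, 0.0) for t in EVIDENCE_TYPES) / total

# One evidence profile: seems clever and checkable, but uncredentialed.
evidence = {"coherence": 0.6, "verifiability": 0.7,
            "intelligence": 0.9, "credentials": 0.2}

# Directions to a coffee shop: seeming knowledgeable carries most weight.
directions_weights = {"coherence": 1, "verifiability": 1,
                      "intelligence": 4, "credentials": 1}
# Assessing climate science: credentials carry most weight.
climate_weights = {"coherence": 1, "verifiability": 1,
                   "intelligence": 1, "credentials": 4}

print(ability_score(evidence, directions_weights))  # relatively high
print(ability_score(evidence, climate_weights))     # relatively low
```

The same person can thus pass the litmus test for one task and fail it for another, which is exactly the asymmetry the coffee shop and climate change examples trade on.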
So, competent epistemic agents will look for verifiability, coherence, intelligence, and credentials as types of evidence pointing toward epistemic ability. Importantly, just because these agents look for the types of evidence they ought to doesn't mean they will evaluate that evidence perfectly. We are talking about competent agents, not ideal ones. And manipulators will seek to take advantage of that fact. To do so, manipulators will set themselves up in polluted epistemic environments. Levy (2021) suggests that polluted epistemic environments are disadvantageous places to engage in epistemic work because they are full of misinformation. Misinformation is best understood simply as false information. As McBrayer puts it, “sometimes, misinformation is the result of bad actors (as in the case of propaganda), sometimes it's the result of negligence (like homemade coronavirus cures)” (McBrayer 2021: 3). Setting themselves up in polluted epistemic environments gives manipulators two advantages. First, it will be harder for sincere epistemic agents to sort the good evidence from the bad. Second, it will be easier for the manipulator to manufacture the illusion that they have epistemic ability. This is because manipulators can guide audience research in those environments toward evidence – either planted or intentionally selected – that seemingly validates sensational or desirable assertions.
To see how, consider the example of Alex Jones. Jones has largely created his own polluted epistemic environment. He runs a radio show called InfoWars which invites questionable but (often) well-credentialed guests on to discuss controversial topics. He also runs other alternative news sites, such as NewsWars.com. Thus, he can always direct the audience to sources of information that he himself generates. In addition, Jones has surrounded himself with dubious sources run by people who support his project. These sources include ThePeoplesVoice.tv, NewsPunch.com, NaturalNews.com, NoMoreFakeNews.com, and so on. I have chosen two examples where Jones quite clearly displayed the technique of directing audience research within a polluted environment.
Alex Jones Gay Bombs: In 2015, Jones claimed that the U.S. government had used bombs that turn people gay “on our troops, in Vietnam… and in Iraq.” He also claimed that “[the U.S. government] sprayed PCP on the troops” and that “they give the troops special vaccines that are really nano-tech that already reengineer their brains.” However, Jones put particular emphasis on the “gay bombs,” saying “if you're a new listener just type in ‘pentagon tested gay bomb’” (emphasis added). (InfoWars, October 16, 2015)
Alex Jones World Economic Forum: On August 19, 2022, Jones claimed that the World Economic Forum had hired over 110,000 information warriors to control the online narrative and take down InfoWars. After introducing the story, Jones directed the listener to research the issue for themselves. In this case, he told people to look at normally trusted sources, but also to check out a YouTube video by a channel called ThePeoplesVoice and an article run by NewsPunch.com.
How do these stories make newcomers think Jones has epistemic ability? Well, someone may do as he recommends and follow up by fact-checking him. In other words, they will engage in what Levy calls shallow research (2022). “Shallow research consists in the consultation of sources we have good reason to regard as reliable and which are aimed at non-experts like us. We engage in shallow research by reading mainstream media, trade books and the like, attending public lectures and so on” (ibid.: 6). This will likely manifest as the new listener searching “Pentagon tested gay bomb” and “WEF information warriors” in a web browser. If they were to do so, they would see articles in the Guardian, the British Medical Journal, the New Scientist, and the BBC all confirming that the Pentagon did try to create a bomb that would turn enemy soldiers gay. And they would see that the World Economic Forum (WEF) did have an initiative to combat misinformation on the internet. These examples show a manipulator using a polluted epistemic environment to establish coherence. That is, Jones' claims seemingly cohere with other independent sources turned up by shallow research. I say “seemingly” because the exact claims – gay bombs were developed and used; the WEF hired misinformation warriors to take Jones down – do not cohere, but some nearby claims do. This seeming coherence might get a newcomer to InfoWars to walk away with some evidence that Jones has epistemic ability. Additionally, the second example shows the manipulator beginning to expand their audience's network of trusted sources to include those run by the manipulator himself (and his associates).
Coherence is just one type of evidence that people look for to establish epistemic ability. Another type of evidence is verifiability. To see how manipulators can use polluted epistemic environments to establish verifiability, consider the following example:
Flat Earth Society: The Flat Earth Society has forums dedicated to helping people work out various problems on their own. One of these forums includes some mathematical calculations meant to show that if the earth were round, then you would be unable to take a photograph of Chicago from the coast of Michigan. The forum explains how to work through the math, and then they show a photograph taken of Chicago from the coast of Michigan. Thus, they seem to show that the earth isn't round.
This example shows how, in polluted epistemic environments, manipulators can create puzzles for epistemic agents to work through and verify for themselves. In this case, the environment is polluted in ways that get people to endorse the following conditional: if the earth were round, then you would be unable to take a photograph of Chicago from the coast of Michigan. People are then asked (and sometimes taught how in workshops) to work through the problem themselves. Upon verifying the math, the agent can then verify that the photograph is possible – either by accepting the photograph provided or by going to take one themselves. This example shows how manipulators can create puzzles and plant evidence in ways that make people feel as though they are independently verifying answers on their own. This, in turn, acts as evidence for the epistemic agent that the manipulator has epistemic ability. Manipulators are often in a better position than traditional media to capitalize on verifiability as evidence of epistemic ability. This is because listening to the media is often not an interactive enterprise, but manipulation can be.
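The arithmetic behind this particular puzzle is easy to reproduce, and reproducing it shows where the planted conditional goes wrong: curvature calculations of this kind commonly omit the observer's height and atmospheric refraction. A minimal sketch of the standard geometry (the ~80 km viewing distance and 2 m observer height are illustrative assumptions, not figures from the forum):

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius

def horizon_distance_m(observer_height_m):
    """Distance from an elevated observer to the geometric horizon."""
    return math.sqrt(2 * EARTH_RADIUS_M * observer_height_m)

def hidden_height_m(distance_m, observer_height_m):
    """How much of a distant object sits below the observer's horizon
    (purely geometric; atmospheric refraction hides less in practice)."""
    beyond = max(0.0, distance_m - horizon_distance_m(observer_height_m))
    return beyond ** 2 / (2 * EARTH_RADIUS_M)

# A naive curvature argument treats the full distance as "drop":
naive_drop = (80_000 ** 2) / (2 * EARTH_RADIUS_M)  # roughly 500 m hidden

# With even a 2 m observer on the shore, noticeably less is hidden,
# so the tops of 400 m+ skyscrapers can clear the horizon, and
# refraction (not modeled here) lifts them further still.
actual_hidden = hidden_height_m(80_000, 2.0)
```

The photograph is therefore consistent with a round earth once the full geometry is used; the puzzle only "verifies" flat-earth claims because the polluted environment supplies the broken version of the calculation.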
In both the Alex Jones and Flat Earth Society examples, the manipulators directed audience research in polluted epistemic environments to create evidence of epistemic ability (i.e., coherence and verifiability). An additional similarity between these cases is that the claims being made were sensational (the claims could also have been desirable). This might initially seem bad for the manipulator, as it has the potential to drive people away. But the sensational nature of these claims is a feature of the manipulator's strategy, not a bug. Here are two reasons to think this. First, people tend to find novel, complex, and comprehensible claims interesting (Silvia 2005). Additionally, interest motivates people to engage further with the interesting claims (ibid.). The sensational claims made by manipulators are certainly novel, complex, and comprehensible. Thus, these claims will likely grab the interest of epistemic agents and thereby motivate them to engage with the manipulator more in the future. Second, people are more likely to remember when someone gets something shocking correct (e.g., learning the USA attempted to develop gay bombs) than when they get trivial things correct (e.g., the weather report from three days ago). Getting shocking claims correct is therefore more likely to make an epistemic agent remember the manipulator as a possibly trustworthy source of information. In this way, a manipulator can build evidence of epistemic ability by guiding audience research in polluted epistemic environments to seemingly validate sensational or desirable assertions. This strategy, deployed over and over, has resulted in people saying things like “we all know that [Jones has messed] some things up, right? But [Jones has] gotten so many things right” (Washington Post quoting Joe Rogan). In other words, some people have come to view Jones as an epistemic source worth listening to.
In addition to these techniques, manipulators will signal to their audience that they are intelligent and properly credentialed. Evidence of intelligence can be built in many ways. One way is intellectual virtue signaling (Levy Reference Levy2023). Intellectual virtues are character traits that are helpful to possess when doing epistemic work (Roberts and Wood Reference Roberts and Wood2007). But, as Levy points out, intellectual virtue signaling is often not about actually possessing intellectual virtues (Reference Levy2023). Rather, it is about signaling "characteristics that other people will value" (ibid.: 311). For example, "it is conceivable that some individuals might attract attention by signaling intellectual virtues like empathy and humility," but "in practice these virtues rarely do well" at getting others to view you as intelligent (ibid.: 315). The intellectual virtues that manipulators are likely to signal include quickness of mind (e.g., the ability to use language well, speak coherently about any topic, and so on), intellectual autonomy (e.g., the ability to think for oneself and not rely on others), and intellectual courage (e.g., the willingness to offer contrarian views) (ibid.: 316).Footnote 32 Other techniques include manipulators promoting each other as intellectual exemplarsFootnote 33; deploying traditional signals of intelligence like classical music and dressing like stereotypical intellectualsFootnote 34; and so on. Competent epistemic agents will see these signals of intelligence as evidence of epistemic ability.
Finally, manipulators will also signal to their audience that they have the credentials necessary to answer some set of questions. Here, two strategies can be employed. The first is to inflate the credentials that the manipulator does possess. To see how this could be done, consider the following example:
Credential Inflation: Robert Malone is an M.D. who worked on mRNA vaccine technology early in his career. Malone has since claimed to have invented mRNA technology and has used that badge to promote vaccine skepticism on platforms like the Joe Rogan show. This claim is contested by other experts in the field, such as Rein Verbeke. Verbeke spoke to The Atlantic about these claims, stating that Malone and his co-authors "sparked for the first time the hope that mRNA could have potential as a new drug class" but that "the achievement of the mRNA vaccines of today is the accomplishment of a lot of collaborative efforts."Footnote 35
In this example, Malone has inflated his already impressive credentials to appear superior to the mainstream experts in academia and government.Footnote 36 Competent epistemic agents could look up Malone, see that he was indeed involved in early mRNA technology, and believe these inflated claims. This would count as evidence that Malone has the epistemic ability required to make definitive claims about COVID-19 vaccines.
The second strategy is to downplay the need for credentials to answer some set of questions. For example:
Credential Deflation: the first rule of the forum website called the Ornery American – run by the author Orson Scott Card – states “we aren't impressed by your credentials, Dr. This or Senator That. We aren't going to take your word for it, we're going to think it through for ourselves.”Footnote 37
This credential deflation signals to an audience that credentials won't help with solving the important epistemic problems at hand. This may not work for people who place a high premium on expertise, but it could work perfectly well on merely competent epistemic agents. Thus, manipulators can get competent epistemic agents to accept that the manipulator has the proper credentials to answer some questions by either inflating their credentials or deflating the need for credentials at all. In either case, setting expectations around credentials and then meeting those expectations could look like evidence of epistemic ability to a competent epistemic agent.Footnote 38
In addition to evidence of epistemic ability, epistemic agents will also look for evidence of benevolence and integrity before placing their trust in a manipulator. Thus, manipulators will also employ techniques to make themselves look benevolent toward their audience. Displays of benevolence can take different forms. One common technique is called love bombing. Love bombing occurs when a manipulator makes exaggerated displays of attention and affection. These overt displays are used to make the victim feel a loving connection quickly. For example, Tourish and Vatcha (Reference Tourish and Vatcha2005) say of love bombing (in the context of cults):
Love Bombing: “One of the most commonly cited cult recruitment techniques is generally known as ‘love bombing’ (Hassan, Reference Hassan1988). Prospective recruits are showered with attention, which expands to affection and then often grows into a plausible simulation of love. This is the courtship phase of the recruitment ritual. The leader wishes to seduce the new recruit into the organization's embrace, slowly habituating them to its strange rituals and complex belief systems.” (Tourish and Vatcha Reference Tourish and Vatcha2005: 17)
Manipulators in our epistemic sense will use love bombing to gain and maintain an audience. Common tropes in the epistemic domain include manipulators saying that their audience is more intelligent than other people, or that they are the only ones who can see the truth. In problematic cases, this love bombing will then shift into abusive and controlling behavior. This can include any range of behaviors – e.g., financial or sexual exploitation, abuse, neglect, misleading for personal or political gain, and so on. Often in epistemic settings, the abuse involves financial exploitation without returning anything of genuine epistemic value. A second common technique for broadcasting benevolence involves the manipulator signaling to their audience that they (the manipulator) are actively working to protect them (the audience) from harm. This technique is used explicitly by Alex Jones, who often declares that he is fighting a war of information to protect his audience. By using techniques like these, manipulators can project an image of benevolence toward their audiences.
Finally, manipulators will employ techniques to make themselves appear to possess moral integrity. One strategy for doing this involves the manipulator broadcasting a set of moral sensibilities that they think their audience will agree with. Thus, you will see manipulators taking on populist interests and aligning themselves with political groups they think will appeal to an audience. For example, Alex Jones often makes partisan political statements such as emphasizing his love of the Second Amendment. An additional technique involves manipulators telling stories that paint themselves as possessing character traits that are associated with moral integrity. People are wired to identify moral character traits in others (see Uhlmann et al. Reference Uhlmann, Pizarro and Diermeier2015). We assess whether others have character traits by examining their actions (ibid.). Some actions are more communicative than others (ibid.).
Acts That Show Integrity: “More generally, acts that can be attributed to multiple plausible motives or causes (i.e., are high in attributional ambiguity; Snyder, Kleck, Strenta, & Mentzer, Reference Snyder, Kleck, Strenta and Mentzer1979) tend to be seen as low in informational value. In contrast, behaviors that are statistically rare or otherwise extreme are perceived as highly informative about character traits (Ditto & Jemmott, Reference Ditto and Jemmott1989; Fiske, Reference Fiske1980; Kelley, Reference Kelley1967; McKenzie & Mikkelsen, Reference McKenzie and Mikkelsen2007). In addition, decisions that are taken quickly and easily (Critcher, Inbar, & Pizarro, Reference Critcher, Inbar and Pizarro2013; Tetlock, Kristel, Elson, Green, & Lerner, Reference Tetlock, Kristel, Elson, Green and Lerner2000; Verplaetse, Vanneste, & Braeckman, Reference Verplaetse, Vanneste and Braeckman2007), that are accompanied by genuine emotions (Trivers, Reference Trivers1971), and that involve costs for the decision maker (Ohtsubo & Watanabe, Reference Ohtsubo and Watanabe2008) are perceived as especially informative about character.” (ibid.: 74)
Thus, manipulators will likely tell stories involving themselves performing statistically rare or otherwise extreme acts that came at a personal cost. These stories will be accompanied by grand displays of emotion – e.g., warmth and compassion toward the vulnerable, great sadness at the existence of evil, anger and vengefulness toward wrongdoing, and so on. Competent epistemic agents will take these stories as signals of moral integrity – and thus be more willing to place their trust in the manipulator.
So, because of perceived risk, competent epistemic agents will look to responsibly offload epistemic labor. Manipulators will set themselves up as possible sources of epistemic information and manufacture evidence of epistemic ability, benevolence toward their audience, and moral integrity. Competent epistemic agents will look for evidence of these factors when trying to determine to whom they can responsibly offload epistemic labor. They will find the manufactured evidence and come to view the manipulator as a trustworthy source of information. This is how the illusion of epistemic trustworthiness is built. Importantly, in this story, competent epistemic agents are largely doing what they ought to do when seeking to offload epistemic labor. They can be duped in this way without the presence of significant epistemic vice or epistemic mistakes. Once a manipulator has duped an epistemic agent in this way, that agent will be willing to endorse other contrarian claims made by the manipulator. And the manipulator can exploit the agent more easily for personal gain. For example, by employing these tactics, Alex Jones can afford to spend nearly $100k a month (presumably, largely from income from InfoWars) without returning any genuine epistemic services to his audience.Footnote 39 This is largely what makes the use of these tactics so manipulative – the willingness and active attempt to sacrifice epistemic goods for personal gain.
5. Fact-checking vs. undercutting epistemic trust
Epistemic institutions – such as universities and news outlets – have noticed the widespread nature of contrarian beliefs. Many of these institutions have begun trying to correct these beliefs. A common strategy for doing so is the practice of fact-checking. AP News has created a section of its website called "AP Fact Check." The goal of this site is to evaluate and discredit misinformation (e.g., contrarian theories). Other news sites have implemented similar practices.Footnote 40 Intuitively – and from personal experience – fact-checking can work to combat contrarian beliefs. For a long time, I believed that vitamin C helped combat the common cold. It was only when I encountered someone fact-checking that claim that I changed my mind. As it turns out, "no consistent effect of vitamin C was seen on the duration or severity of colds in the therapeutic trials" (Hemilä and Chalker Reference Hemilä and Chalker2013). There is empirical evidence that backs up this personal anecdote. For example, Swire et al. (Reference Swire, Berinsky, Lewandowsky and Ecker2017) showed that fact-checking can be effective at changing the strength of people's contrarian beliefs.Footnote 41 However, as stated at the beginning of this paper, belief in contrarian theories can be explained in (at least) a few different ways. These beliefs could be held because of epistemic vice, epistemic mistake, or epistemic manipulation. It seems plausible, prima facie, that how people come to hold contrarian theories will affect strategies for correcting those beliefs. For example, if you know that someone believes a contrarian theory because they made an epistemic mistake, correcting that belief might be as simple as doing some fact-checking. If someone believes a contrarian theory because of epistemic manipulation, the correction might not be that simple.
In this section, I will argue that – while fact-checking can be a worthwhile endeavor – it will often be an ineffective strategy for combating contrarian beliefs held because of epistemic manipulation. Instead, I suggest we ought to try engaging in the practice of trust undercutting.
The notion that fact-checking is sometimes ineffective in correcting contrarian beliefs has been noted by both philosophers (Nguyen Reference Nguyen2020, Reference Nguyen2023; Novaes Reference Novaes2020) and scientists (Hart and Nisbet Reference Hart and Nisbet2012; Nyhan and Reifler Reference Nyhan and Reifler2010; Nyhan et al. Reference Nyhan, Reifler, Richey and Freed2014). For example, Nyhan and Reifler (Reference Nyhan and Reifler2010) showed that attempts to correct the beliefs of ideological groups often fail. Even worse, their results found that correction attempts could “backfire,” causing the ideological group to believe the corrected claims even more strongly (ibid.). The illusion of epistemic trustworthiness can give us an explanation as to why this is – and suggest a possible alternative method of correction in cases of epistemic manipulation. In cases of epistemic manipulation, fact-checking is likely to fail to correct contrarian beliefs because the trustor's trust in the manipulator gives the trustor a reason to doubt the fact-checking source. To see why, consider the following two cases:
Fact-Checking the BBC: Assume that you have come to trust the BBC. You read a BBC article one morning about Taylor Swift breaking records during the Grammy nomination process.Footnote 42 You come to believe that these events have transpired. Later, you see an article on Facebook claiming that Taylor Swift hadn't broken any records.
In this case, seeing an article on Facebook will not affect your belief that Taylor Swift broke records. This is because you have come to trust the BBC enough to form your beliefs based on their journalism. Additionally, however, this seems to give you a reason to actively distrust the random source you saw on Facebook. This source is telling you the opposite of something you know to be true. The same thing will happen to people who have come to trust manipulators. For example:
Fact-Checking InfoWars: Beth has come to trust InfoWars. She sees an article claiming the Grammys lied about Taylor Swift breaking records in order to further the feminist movement. Later, Beth sees an article by the BBC saying Taylor Swift has broken records during the Grammy nomination process.
In this case, Beth seeing the BBC article will not affect her belief. This is because she has come to trust InfoWars enough to form her beliefs based on their journalism. Additionally, the BBC contradicting Alex Jones gives Beth a reason to distrust the BBC. They are, after all, telling her the opposite of something she believes very strongly. Again, this tracks with empirical work in this area. Perhaps unsurprisingly, “the persuasiveness of a message increases with the communicator's perceived credibility and expertise” (Lewandowsky et al. Reference Lewandowsky, Ecker, Seifert, Schwarz and Cook2012). Additionally, Walter and Tukachinsky (Reference Walter and Tukachinsky2020) found that “corrections [of misinformation] are less effective if the misinformation was attributed to a credible source.”Footnote 43 That is, if someone perceives a manipulator to be a credible source, their message is likely to be believed over and above competing sources.Footnote 44 Thus, the illusion of epistemic trustworthiness can explain – at least in some cases – why fact-checking fails to correct beliefs or backfires.
As we can see from the above examples, once someone trusts a source of information they will view sources offering conflicting views skeptically, or even come to distrust those alternative sources altogether. In cases like these, fact-checking likely won't be successful. Here we need an alternative practice for combating contrarian beliefs. One possible strategy is to undercut the trustor's trust in the manipulator. If someone is a competent epistemic agent, then they will have formed their trust in InfoWars by looking for evidence of epistemic ability, benevolence toward their audience, and moral integrity. Thus, undercutting trust in a manipulator will involve undercutting evidence of these factors. This strategy tracks well with some suggested corrections in the empirical literature. For example:
Credibility Corrections: “Corrections should criticize the credibility of the source of the misinformation. This serves two functions. First, source credibility is central to processing the initial (mis)information but not for the correction source. Thus, trying to undo the damage done by climate change deniers and vaccine skeptics with messages that rely, primarily, on the expertise of their sources is likely to be futile. Instead, the correction should focus on discrediting the sources of misinformation. For example, rather than emphasizing the knowledge of a climate science expert, messages should highlight the lack of expertise and relevant training of climate change skeptics. Second, questioning the credibility of the misinformation source can enhance the coherence of the corrective message. Put differently, discrediting the source as biased and lacking goodwill can explain the spread of the misinformation and make it easier for message consumers to maintain a coherent mental model that dismisses the misinformation.” (Walter and Tukachinsky Reference Walter and Tukachinsky2020)
To make trust undercutting more concrete, consider the case of Beth once more. She trusts InfoWars – and this explains her contrarian belief. Fact-checking is likely either to not work or to backfire. Instead, it may be prudent to undercut her trust in InfoWars. This could involve undercutting Beth's perception of Jones' epistemic ability – e.g., showing that Jones' claims are incoherent, can't be independently verified, or that he lacks intelligence and the proper credentials. However, these epistemic corrections are likely to be extremely difficult to implement. Perhaps a better starting point would be to show Beth that Jones is not benevolent toward his audience, or that he lacks moral integrity. If Beth conforms to general trends, she will not trust someone she views as malevolent or as lacking moral integrity. These are possibly easier points to challenge than unraveling Jones' entire worldview (which Beth has likely adopted). Of course, this correction is likely to be very difficult. And it is not guaranteed to result in competent epistemic agents coming to believe the official story. Rather, successful implementation would leave Beth freed from her trust in a manipulator. Hopefully, this would allow her to find better sources to place her trust in. While the remaining difficulties are large, there was never likely to be a magic bullet for correcting contrarian beliefs formed by manipulative means. At the very least, trying to help people stop trusting epistemic manipulators seems like a good place to start.
6. Conclusion
One explanation for contrarian beliefs is epistemic manipulation. In this paper, I showed one mechanism by which manipulators can get (even competent) epistemic agents to endorse contrarian theories. I call this mechanism the illusion of epistemic trustworthiness. Manipulators build the illusion of epistemic trustworthiness by manufacturing evidence of epistemic ability, benevolence toward their audience, and moral integrity. By manufacturing this evidence, manipulators can get epistemic agents to view the manipulator as a trustworthy source of information. When an epistemic agent views a manipulator as trustworthy, fact-checking will be an ineffective way of combating the agent's contrarian beliefs. Instead, we ought to engage in the practice of trust undercutting. This involves undercutting evidence of epistemic ability, benevolence, and integrity. Hopefully, this strategy will allow the epistemic agent to be more open to finding better epistemic sources to place their trust in.Footnote 45