1. Introduction
In this article, I argue that the idea that conspiracies tend to fail, which is often offered as justification for dismissing conspiracy theories, is not well grounded and does not provide a strong reason for skepticism about such theories. This adds to the literature supporting the view that each conspiracy theory ought to be judged on its own particular evidentiary merits or faults, not merely by appeal to generalities.
The notion that conspiracies tend to fail appears to play a significant role in influencing “sophisticated” people's opinions about conspiracy theories. Though this notion has often been explicitly asserted, it is even more often implied. The dismissive remark, “someone would have talked,” for example, is fairly ubiquitous. And the perceived implication plays a major role in supporting the view that conspiracy theories are not to be taken seriously. However, there is little reason, prima facie, to think that conspiracies do tend to fail. We don't have strong grounds to believe, for example, that most petty conspiratorial crimes are foiled, or that covert operations are either exposed or backfire. After all, it is difficult to know how many succeed. If we are to be confident that conspiracy theories can be dismissed because conspiracies tend to fail, we need a substantial argument showing that the relevant kinds of conspiracies actually do fail. But such an argument, I maintain, appears to be lacking.
I begin by reviewing and augmenting David Coady's reasons for skepticism regarding the claim that conspiracies tend to fail.Footnote 1 I then turn to a particular recent assertion that conspiracies “tend to fail” (Douglas et al. 2019: 4), seemingly supported by four sources. I discuss two of them at length, namely Keeley (1999) and Grimes (2016). I focus particularly on showing that one of these studies, Grimes (2016), is seriously flawed in multiple ways, and thus fails to provide a substantial reason to dismiss conspiracy theories, even fairly large ones. My critique of Grimes adds substantially to a more limited, and slightly flawed, critique by Dentith (2019), and places it squarely in the context of the general claim that “conspiracies tend to fail.”
2. Do conspiracies tend to fail?
A number of people, apparently going back as far as Machiavelli (see Coady 2018: 179), have claimed that conspiracies tend to fail. Coady also cites Karl Popper and Daniel Pipes as making essentially the same claim. In their own way, Sunstein and Vermeule (2009) make the claim as well, at least with respect to “open societies.”Footnote 2 Responding to this general idea, Coady writes, “The argument that conspiracies tend to fail because they always or usually end up being exposed is mistaken in two ways. First, there is no reason to believe the premise is true. Second, the conclusion does not follow from the premise” (Coady 2012: 117; cf. 2018: 180). In support of the first point – that there is no reason to believe that conspiracies are usually exposed – Coady points out that secrets successfully kept to this day cannot be offered as examples, however numerous they may be.Footnote 3 So, we cannot determine the fraction of exposed conspiracies relative to the total number of conspiracies (both those revealed and those not revealed), because we have no way of knowing the number of those not revealed. Further, based on what we do know, Coady rhetorically asks, “Does ‘the U.S. government regularly engage in conspiratorial and clandestine operations’? No one familiar with US history could think otherwise” (Coady 2012: 120). This consideration actually gives us a reason to think that certain kinds of conspiracies do not tend to fail, at least in the sense of their specific details becoming widely known.
In support of Coady's second point – that a tendency of conspiracies to be exposed does not imply a tendency to fail – Coady points out that temporary and imperfect secrecy is sufficient for the success of many conspiracies. So, the mere fact that a conspiracy is eventually exposed does not mean that it failed. The conspiracy need only remain sufficiently secret for long enough that the purposes of the conspirators are achieved or their interests furthered. The orchestrated propaganda supporting the Iraq war could be considered an example: at least some fairly mainstream sources accept this as having been a genuine conspiracy, but this acceptance came too late to have any substantial impact.
However, it could be responded that, for a conspiracy, being exposed is a kind of failure. And it is indeed the kind of failure that is relevant when addressing the plausibility of conspiracy theories that have been around for a considerable time. However, one may reasonably wonder whether conspiracies requiring different degrees of secrecy differ correspondingly in design, execution, and cover-up efforts. And so, one cannot draw a firm conclusion about conspiracies that would seem to require long-lasting secrecy – perhaps because of their degree of “toxicity” (see Basham 2018) – from facts about the revelation of conspiracies that did not require such long-lasting secrecy to achieve their goal and avoid serious blowback. Further, the sort of secrecy that a conspiracy requires should be considered. Generally, it is not necessary that nobody besides the conspirators knows about it. It may be enough that the majority of the most relevant population does not believe it, or even just feels powerless to do anything about it. This may require nothing more than plausible deniability.
3. Sources supposedly supporting the idea that conspiracies tend to fail
The above considerations notwithstanding, the claim that conspiracies tend to fail continues to be made in ways that seem to cast doubt on the plausibility of conspiracy theories. For example, in a recent survey of the social science literature on conspiracy theories, called “Understanding Conspiracy Theories,” Karen Douglas and her colleagues remark, “Conspiracies such as the Watergate scandal do happen, but because of the difficulties inherent in executing plans and keeping people quiet, they tend to fail” (Douglas et al. 2019: 4).Footnote 4 Douglas et al. cite four sources: Popper (1972), Keeley (1999), Dai and Handley-Schachler (2015), and Grimes (2016), giving the impression that this is a fact that has been established in the literature. However, none of these sources provides convincing evidence in support of this claim, and only Grimes (2016) really tries to do so. Dai and Handley-Schachler (2015) and Popper (1972), first of all, are not really germane. Dai and Handley-Schachler (2015) is specifically about accounting fraud, and it does not suggest that conspiracies to commit such fraud tend to fail.Footnote 5 And Popper's argument is about “the conspiracy theory of society,” which imagines that conspirators are controlling everything. It argues that this must be wrong because “nothing ever comes off exactly as intended” (Popper 1972; see Coady 2006: 130).
Popper's argument does not provide strong reasons to think that less radical conspiracies tend to fail.Footnote 6 As for Keeley's article, though it does not provide a substantial argument in support of the notion that conspiracies tend to fail, it is nevertheless worth discussing in this context, and thus is the subject of the following section. Of the four, only Grimes really provides an argument that, if successful, would support Douglas et al.'s claim. However, as I will argue below, it is not successful.
Before addressing Keeley and Grimes, it is worth noting that, right after Douglas et al. claim that conspiracies tend to fail, the very next sentence asserts: “When conspiracies fail – or are otherwise exposed – the appropriate experts deem them as having actually occurred” (Douglas et al. 2019: 4). In support of this claim, Douglas et al. cite Levy (2007). But that article, while it does seem to assume something like this, does not actually provide any evidence for it. In fact, unless the claim is understood as a tautology – with “exposure” defined in terms of expert acceptance – the idea that “the appropriate experts” will endorse the claims of whistleblowers appears to be empirically false. Leaks, confessions, and the like that point to a conspiracy do not reliably lead experts and mainstream sources to deem that a conspiracy actually occurred, as discussed below (both in the next section and in section 5.4). Determining whether or not experts should affirm such leaked allegations of conspiracy is, in any particular case, often a nontrivial task.
4. Keeley (1999), “Of Conspiracy Theories”
Keeley's essay does not, strictly speaking, argue that conspiracies tend to fail. Rather, Keeley seems to suggest that we are not warranted in believing “mature” conspiracy theories, namely, those that have been around a while and have been investigated. Keeley seems to think that if the national media does not report damning evidence, then it is probable that no such evidence exists. For otherwise we would have to increase the scope of the conspiracy, implicating the press and others, to the point that we couldn't have confidence in anything. While I think there are problems with Keeley's analysis, the relevant point here is that what Keeley is arguing is different from the idea that conspiracies tend to fail. If Douglas et al. want to use his article as evidence, they should explain how Keeley's position justifies theirs.
Keeley does make some incidental remarks about the likelihood of explosive secrets being successfully kept. Perhaps this is what Douglas et al. had in mind. Regarding conspiracy theories about the Oklahoma City bombing, Keeley writes, “[I]t is impossible to believe that not a single member of the BATF stationed in Oklahoma would be moved by guilt, self-interest, or some other motivation to reveal that agency's role in the tragedy, if not to the press, then to a lover or a family member” (Keeley 1999: 124). While seemingly reasonable, this is really an expression of personal incredulity stated as though it were an objective fact. By citing Keeley in the way that they did, Douglas et al. give the appearance of objective grounding to an idea that is, it now appears, grounded merely in incredulity that may not be warranted. Indeed, as far as this quotation goes, the idea that conspiracies tend to fail seems more a premise than a conclusion. Keeley does offer some support for that premise, which I'll get to in a moment. But first notice some problems with Keeley's inference. He seems to be arguing: (1) If the BATF was involved, someone would have talked. (2) Nobody did. (3) Therefore, the BATF was not involved. However, not only is the first premise questionable; the second one is as well. We have no idea how many secrets of the kind in question are told to friends, lovers, and relatives. In general, we simply have no access to that type of information. If a secret is passed in this way, it may just stop there. Or it may spread a bit among a few other intimates. Or it may spread further and become a rumor within certain circles. Most people would have no knowledge of these minor “leaks.” If the rumors spread even further, we might hear of it. But even then, it would just be a rumor to us.
Further, in general, it is not true that such rumors reliably bring down conspiracies, nor is it true that such rumors do not exist in relation to mature conspiracy theories. Some of these are more than mere rumors; they are accounts from ear-witnesses of confessions, or at least of seemingly self-incriminating statements. Examples regarding the JFK case include the following: E. Howard Hunt, a CIA operative and convicted Watergate conspirator, made a deathbed “confession” regarding the JFK assassination, though he admitted only to being a “bench warmer” on the periphery of the plot. In addition, while apparently intoxicated, David Morales, a senior CIA official who was reportedly involved in assassination plots against Castro and in other controversial covert CIA operations, is reported to have made a seemingly incriminating exclamation about the JFK assassination. Specifically, regarding JFK, Morales said, “We took care of that son of a bitch, didn't we?”Footnote 7 Also, regarding the assassination of Martin Luther King, a man named Lloyd Jowers confessed to playing a small role, and he implicated government agencies. A jury agreed in a civil case brought by the King family, but the government just pooh-poohed the whole thing, and the media paid little attention. I offer these examples not as evidence of government involvement in these assassinations. Here they serve as evidence that confessions and self-incriminatory statements are not enough to turn a conspiracy theory into an acknowledged fact. For it is often hard to know how much stock one should put in such statements. After all, in addition to the examples I have mentioned, there are many more clearly dubious “confessions” or self-incriminatory statements.
And there is no shortage of interesting borderline cases.Footnote 8 (The fact that leaks, or what appear to be leaks, do not reliably lead to the acceptance of conspiracy theories is also relevant to the “demonstration” offered by Grimes, and will be discussed more below.)
Now back to Keeley's reasoning. As a way of supporting the claim quoted above, Keeley notes, “Government agencies, even those as regulated and controlled as the military and intelligence agencies, are plagued with leaks and rumors” (Keeley 1999: 124). However, it is not clear how true this is, or what the implications are. First, there is an important difference between a leak and a rumor. Rumors are generally given little weight, and even more clearly fail to bring down conspiracies. Indeed, rumors are not that different from conspiracy theories themselves.Footnote 9 Second, it is not clear that intelligence agencies are plagued by leaks, constantly foiling their covert operations and outing their operatives. That actually seems fairly rare, given that such operations are presumably continuously ongoing. And so, thinking about the military and intelligence agencies, again, suggests that certain types of conspiracies do not tend to fail in the relevant sense.
Keeley also asserts, “To propose that an explosive secret could be closeted for any length of time simply reveals a lack of understanding of the nature of modern bureaucracies” (Keeley 1999: 124). That seems to be a common view. But not every informed and sophisticated person shares it. Fletcher Prouty, author of The Secret Team, for example, does not seem to agree (Prouty 2011). And Prouty had relevant firsthand knowledge of how plans were compartmentalized to help maintain secrecy and ensure that the explosiveness of any particular leak would be limited. Prouty was the liaison between the military and the CIA in the early 1960s. He also suggests that some secret-keepers are professional secret-keepers, and their reliability as secret-keepers shouldn't be conflated with that of loose-lipped political operatives.
In addition, making a point that will also be made by Dentith against Grimes (discussed below), Lee Basham remarks:
Keeley overlooks the implications of the remarkably hierarchical nature of our civilization's institutions of commerce and control. … Far from supposing that leaks inevitably expose the “explosive secrets” of conspiracy, we might reason that the more fully developed and high placed a conspiracy is, the more experienced and able its practitioners at controlling information and either co-opting, discrediting or eliminating those who go astray or otherwise encounter the truth. … [W]e do know this: the existence of “openly secretive” governmental and corporate institutions is the norm in contemporary civilization. Despite the occasional leaks they appear to have been quite successful in their control of extremely disturbing information. This is all the greater an achievement given their status as widely recognized agencies of secrets and secret projects. We can only boggle at the difficulties of reliably revealing the aims and means of competent organizations that systematically hide their very existence. (Basham 2001: 272)
As for the press, Basham has emphasized that some stories are too “toxic” to report (2018: 281–8).Footnote 10 In that context, even if a few individual reporters believed that a story deserved to be told, they may easily be thwarted by their superiors. Knowing that bucking the system in such a case would likely be career-limiting, few would attempt it. And those who try, ostracized by the mainstream, would typically have only marginal influence anyway. Since this is predictable, there is little incentive to buck the system. Or, at least, such a dynamic is plausible and should not be summarily dismissed.Footnote 11 (One might think of Gary Webb in this context; see Schou 2009.)
Admittedly, Basham's considerations are far from decisive. The adequacy of his reasoning and evidence, in comparison to Keeley's, could be debated at length. But the important point here is precisely that, contrary to the impression given by Douglas et al., the degree to which the truth about a conspiracy may be effectively repressed, even if it is not fully concealed, is not a settled matter.
Even Keeley himself gives us a reason to think that the kinds of conspiracies in question might not tend to fail. He writes: “[T]he conspiracy theorist is working in a domain where the investigated actively seeks to hamper the investigation … As evidenced by any number of twentieth century, U.S. government-sponsored activities (take your pick), we have reason to believe that there exist forces with both motive and capacity to carry out effective disinformation campaigns” (Keeley 1999: 120–1, emphasis in original).
Taking all this into consideration, we can't say with justified confidence that conspiracies of the kind in question will tend to fail on account of the inability of people to keep quiet. And so, we can't dismiss conspiracy theories on the assumption that, if true, they would have been revealed. Even Keeley admits, ultimately, that “The best we can do is track the evaluation of given theories over time and come to some consensus as to when belief in the theory entails more skepticism than we can stomach” (1999: 126).
One final note about the citation of Keeley (1999): Keeley's essay was certainly a groundbreaking one, but it has been much criticized in the philosophical literature on conspiracy theories (see Basham 2001, 2018; Clarke 2002: 140–3; Coady 2003; Räikkä 2009: 188–98).Footnote 12 Indeed, it is perhaps the most contested article on this topic. So, citing it generically, as if it represented uncontested scholarship, is questionable in itself. Further, the specific notion at issue here, that conspiracies tend to fail because maintaining adequate secrecy is nearly impossible, has been explicitly challenged in the philosophical literature.Footnote 13 But Douglas et al. give no indication that there is any controversy regarding either Keeley's arguments in general or this issue in particular. In sum, the citation of Keeley (1999) gives a contested claim that is not even unequivocally supported by that article an undue air of authority.
5. Grimes (2016), “On the Viability of Conspiratorial Beliefs”
Finally, we are left with Grimes (2016), which I will discuss at greater length because, unlike the rest, it does at least seem to provide significant support for Douglas et al.'s claim, at least if that claim is assumed to refer to fairly large conspiracies. Indeed, Grimes offers what is probably the “best” (i.e., most seemingly authoritative) case for the claim that conspiracy theories will tend to fail in a way that is relevant to our assessment of conspiracy theories. Grimes purports to have provided a “simple mathematical demonstration of the untenability” of certain conspiracy theories (2016: 14).
Grimes's model is built on the assumptions that conspirators are generally dedicated to the concealment of the conspiracy and that the conspiracy would be exposed by any leak from one of the conspirators (2016: 3). He uses the following three exposed conspiracies to estimate the probability, over time, of such exposure, relative to the size of the conspiracy: (1) The NSA's PRISM surveillance program exposed by Edward Snowden. (2) The Tuskegee syphilis experiment, exposed by Peter Buxtun, which involved the US Public Health Service deceiving African-American men about their disease, and observing them but essentially not treating them. And, (3) the FBI forensic scandal, in which Supervisory Special Agent Frederic Whitehurst uncovered and exposed scientific misconduct in the FBI crime lab. Grimes uses his results to estimate the likelihood of certain specific conspiracy theories as well as to justify the general claim that large conspiracies are likely to quickly fail and so mature theories that imply large conspiracies are untenable.
In Grimes's own words, he developed “a simple mathematical model for conspiracies involving multiple actors with time, which yields failure probability for any given conspiracy” (Grimes 2016: 1). He uses this to argue that “large conspiracies (≥1000 agents) quickly become untenable and prone to failure” (2016: 1). Grimes acknowledges that “historical examples of exposed conspiracies do exist.” Indeed, it is the three above-mentioned “known scandals” that provide a basis for his model's parameters. He attempts to use his model to estimate the likelihood of “some commonly-held conspiratorial beliefs,” and to thereby demonstrate that they are highly unlikely. Specifically, he uses his model to evaluate the following ideas: “the moon-landings were faked, climate-change is a hoax, vaccination is dangerous and that a cure for cancer is being suppressed by vested interests” (2016: 1). He finds, “Simulations of these claims predict that intrinsic failure would be imminent even with the most generous estimates for the secret-keeping ability of active participants” (2016: 1, emphasis added).
Grimes's article garnered significant attention. It is described in the Internet Encyclopedia of Philosophy as follows: “Grimes (2016) has conducted simulations showing that large conspiracies with 1000 agents or more are unlikely to succeed due to problems with maintaining secrecy” (Pauly n.d.).Footnote 14 In addition, Grimes's study was covered favorably in the press. An article in Mother Jones magazine suggests that Grimes succeeded in showing certain conspiracy theories to be “really dumb” (McDonnell 2016). The BBC likewise covered the study uncritically (Berezow 2016), as did PBS (Barajas 2016)Footnote 15 and The Washington Post (Ohlheiser 2016), as well as LiveScience, ScienceDaily, and Wired. The Wired article prominently proclaims, “The equation shows most conspiracy theories would have unravelled by now” (Reynolds 2017, emphasis added) – something the study did not even claim to show, since it addressed only purported conspiracies that are large and have been around for some time.
Intuitively, it seems that some versions of prominent conspiracy theories posit conspiracies so large and ambitious that it seems very likely that they, if true, would have been exposed. Grimes aims to provide “a clear rationale for clarifying the outlandish from the reasonable” (Grimes 2016: 2). He develops a model for the likelihood of exposure of a conspiracy based on real conspiracies that have been exposed, and as mentioned above, he applies his model to conspiracy theories involving the moon landing, climate change, cancer cures, and vaccines. However, in reality, not all versions of such theories imply the same scope of conspirators or have the same susceptibility to exposure.Footnote 16 And though Grimes refers to “commonly-held conspiratorial beliefs” (2016: 1), it is not clear that the versions that are genuinely commonly held really have the characteristics that make them susceptible to the kind of failure Grimes focuses on.
In any case, if Grimes's method is valid, it should work for all conspiracy theories that have numbers in the right ranges. And further, Grimes seems to assume that, for example, all conspiracy theories alleging some sort of cover-up of vaccination harms would have to be large enough in a relevant way to be essentially ruled out as too likely to have already been exposed (see section 5.5 below). If he has not shown that, the door remains open to the possibility that there are versions of these types of conspiracies that might not be reasonably expected to have failed. Grimes clearly does not want that. The whole point of his project is to shut that door. This is fairly clear from the article itself, but it is unambiguously clear from Grimes's later comments about the article, and it is corroborated by the way the article has been read by journalists seemingly eager to promote Grimes's findings. And this is, in any case, the interesting question: Has he shown that we may safely dismiss conspiracy theories of the general types that he describes? I argue that he has not.
5.1. General problems with Grimes's analysis
Grimes's argument has serious flaws. For one, as discussed above, although we know of some conspiracies that were exposed, we don't know how many were not exposed. Grimes seems to think that we can take a few cases that were exposed and use those to judge how long it takes for exposure. However, he is working from a potentially biased sample. We also need to know how many real conspiracies have never been discovered or proven. But, of course, we cannot know that. While it might make some sense to make approximations based on the ranges we find in existing cases, Grimes is dealing with a very small sample (three revealed conspiracies), and these cases differ considerably from one another. They are as follows: (1) the Tuskegee Syphilis Experiment, in which African-American men with syphilis were studied without being significantly treated, while the fact that they had syphilis was concealed from them; (2) the FBI forensic scandal, which involved the conviction of many people based (at least in part) on misleading testimony and bogus forensics, revealed by Frederic Whitehurst; and (3) the NSA PRISM project, involving the U.S. National Security Agency's massive data collection, which was brought to light by Edward Snowden.
Indeed, the three examples do not even seem to be of the same type, such that modeling for that type might be thought appropriate. (1) The Tuskegee Experiment was completely different from the other two, particularly in involving the publicly available documentation of what was going on, published in scientific journals. This made what was going on undeniable once sufficient light was shone upon it. In this important respect, it does not resemble most purported conspiracies. (2) The FBI scandal, compared with the other two examples, seems somewhat pedestrian, involving ongoing corrupt practices at a relatively low level, rather than a particular elaborate plot run from on high. In this way, it also differs from many conspiracy theories. And, while I don't mean to minimize the severity of the retaliation suffered by Whitehurst for his revelation of the fraud that was occurring, it is hardly comparable with Edward Snowden's case. (3) Snowden's revelations about the NSA were truly extraordinary, requiring great sacrifices and even greater risks on the part of Snowden to reveal them. These revelations also required incredibly meticulous execution. So, it is hard to see how Snowden's case can be used to determine what should be expected to happen.
5.2. Grimes's model is problematic
Dentith has argued that the examples on which Grimes's mathematical analysis is based do not actually fit his conceptual model of how conspiracies fail. Grimes's model is ostensibly based on examples of conspiracies that were exposed by leaks from insiders.Footnote 17 And yet the revelations do not come from conspirators in any of the three examples that form the basis for his model. Regarding the FBI forensic scandal, Frederic Whitehurst revealed fraud in his laboratory that he had not participated in. And, in contrast to how Grimes portrays the revelation of the NSA surveillance programs, “Snowden's narrative is that of being a whistleblower: an outsider – rather than a conspirator – who discovered the existence of the conspiracy” (Dentith 2019: 2257). Similarly, regarding the Tuskegee Experiment, Dentith explains:
Information about the experiment was openly published in medical journals; the cover-up, so to speak, was that the patients were not told about the experiment. Once again, it was outsiders who then revealed the existence of the conspiracy. For example, Peter Buxton – who is often considered as the whistleblower in this case – came to know about the experiments because of his job with the United States Public Health Service. He was not a conspirator but, rather, a worried public official who leaked the information to the press because his worries were not taken seriously by management. (Dentith 2019: 2257)
Dentith concludes that Grimes's examples “fail to capture the very thing he wants to measure” (2019: 2257).
While there is a flaw in Dentith's critique, the general thrust can nevertheless be sustained. Specifically, Grimes could legitimately respond that the conspiracies that he chose for his model, which occurred within the NSA, the US Public Health Service, and the FBI, were exposed by people who were in those institutions. So, the conspiracies were exposed by insiders, even if they were not conspirators. While other real conspiracies were not exposed by insiders in this sense (e.g., COINTELPRO), in Grimes's examples they were. And it would be fair for Grimes to point out that people within such institutions may be more likely than outsiders to become aware of conspiracies going on within those institutions. Still, the number of individual employees actually positioned to become aware of a conspiracy within an institution is likely to vary widely, depending both on the nature of the institution and the nature of the conspiracy. Presumably, fewer people were in a position to know or find out about the scope of NSA spying than were in a position to know or find out about the Tuskegee Experiment, about which scientific articles were being published. And this scope of potential access has little if anything to do with the size of the institution as a whole. And yet Grimes's model is based on the size of these large institutions, not the size of the actual conspiracy, nor the number of people with a heightened potential of finding out. One can hardly blame Grimes for not basing his calculations on the latter category, even though it is much more relevant than the size of the whole institution. For it seems hopelessly vague and immeasurable. But that just means that the kind of study that he wants to do may not be feasible.
5.3. Grimes's estimates are not reasonable
Relatedly, Grimes's application of his model is also problematic. When he evaluates “commonly-held conspiratorial beliefs,” he is not sufficiently generous regarding the size of the conspiracy they entail. As Dentith again explains:
A conspiracy can look big, yet only a small number of people involved in it might know its full extent or aim. Some members of the conspiracy will be lackeys, goons or even unwitting conspirators. Not everyone in the NSA need necessarily know that the data they are collecting and processing has been illegally obtained, and FBI agents who were using forensic evidence to secure convictions may not have been informed by senior personnel that the kind of evidence they were relying upon was of dubious merit. It is even possible to be involved in a conspiracy without realising you are conspiring. (Dentith Reference Dentith2019: 2258)Footnote 18
Ignoring all such nuances, Grimes simplistically assumes that the appropriate number of conspirators to assign to the “Moon-landing Hoax” conspiracy theory, for example, is the “Peak NASA employment” in 1965, that is, 411,000 people. The BBC dutifully reports this without question, under the title, “Math Study Shows Conspiracies ‘Prone to Unravelling’” (Berezow Reference Berezow2016). Grimes offers no good reason to think that all NASA employees would be in on it, or aware that it was a hoax. Indeed, if such a conspiracy had been attempted, it may have been managed so as to keep the circle of those “in the know” as small as possible. If the right people were involved, there may have been some creative way to do this by leveraging NASA's institutional structure. It is not intuitively clear why the number of people truly in a position to leak the truth would have been greater than, say, a few hundred.Footnote 19 So, it seems quite possible that Grimes's estimate is off by three orders of magnitude. Grimes has done nothing to show that such an intuitive estimation is wrong.
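The stakes of such a headcount error can be made vivid with a toy calculation. The sketch below is a simplified exponential-leak model of my own, not Grimes's actual parameterization, and the per-person leak rate used is a purely illustrative assumption:

```python
# Toy model: if each of N conspirators leaks independently at a constant
# annual rate lam, the expected time to the first leak is 1 / (N * lam).
# The rate below is a hypothetical illustration, not Grimes's fitted value.
LAM = 1e-5  # assumed per-person annual leak rate

def expected_years_to_first_leak(n: int, lam: float = LAM) -> float:
    """Expected years until at least one of n conspirators leaks."""
    return 1.0 / (n * lam)

# Grimes's headcount (peak NASA employment) vs. a compartmentalized core:
print(expected_years_to_first_leak(411_000))  # under 3 months
print(expected_years_to_first_leak(400))      # roughly 250 years
```

Whatever rate one assumes, a three-orders-of-magnitude error in the headcount translates directly into a three-orders-of-magnitude error in the expected time to exposure, so everything turns on the estimate of who is genuinely "in the know."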
Regarding conspiracies involving climate change, vaccine science, and suppressed cancer cures, Grimes considers the size of the conspiracy to be the size of the relevant scientific community. Consider the justification that Grimes offers for the idea that a “scientific conspiracy” would have to be “vast”:
[E]ven if a small devious cohort of rouge [sic] scientists falsified data for climate change or attempted to cover-up vaccine information, examination by other scientists would fatally undermine the nascent conspiracy. To circumvent this, the vast majority of scientists in a field would have to mutually conspire – a circumstance the model predicts is exceptionally unlikely to be viable. (Grimes Reference Grimes2016: 11)
Grimes seems to be suggesting that if there were a conspiracy involving scientific fraud, even if it was perpetrated by only a small number of scientists, it would have been exposed by other scientists, unless they too became party to the conspiracy – and at that point the theory becomes implausible. But it is far from clear that this picture is accurate. If, for example, a group of scientists at Merck stumble upon a cure for cancer, and management decides to suppress it, why would everyone at Merck have to know about that, or be in a position to discover it? Yet Grimes seems to assume that even everyone at Novartis, Pfizer, Roche, Sanofi, Johnson & Johnson, GlaxoSmithKline, and AstraZeneca would also be in a position to blow the whistle (see Grimes Reference Grimes2016: 8). Do theories about suppressed cancer cures necessarily posit or imply this? It is not clear that they do. (See section 5.5 for a similar real-life example involving vaccination science.) Grimes seems to overestimate the relevant scrutiny that exists and ignores the possible influence of pervasive bias. If a small group of scientists fudge some numbers, there may well be no indications of that fudging that peer reviewers, or other non-involved scientists, would easily be able to detect. And if the peer reviewers and other fellow scientists found the results both pleasing and well within their expectations – fitting the current narrative – they may not be particularly motivated to scrutinize the results.
It might be argued that a fraudulent study or two would not be sufficient to change the discourse on a topic such as vaccine safety or climate change. But that is not entirely clear either, at least not in every case. For example, in an evaluation of the science regarding the relationship between vaccination and pervasive developmental disorder (which includes autism), the number of studies that the Institute of Medicine relied upon was quite small. Starting with 32 somewhat relevant studies, they whittled that down to only four studies that they judged “might help with a study of the [vaccine] schedule” (Institute of Medicine 2013: 86). They found that the scientific evidence “does not suggest a causal association” between autism and the overall vaccination schedule, but they describe this as being based on evidence that is “limited both in quantity and in quality” (p. 86). So, if just a couple of experiments were carefully manipulated, and those specific individual frauds were not discovered, that may plausibly have a significant influence on the ongoing consensus, especially if the fraud was convenient to powerful interests.
Regarding the claim that climate change is “a hoax,” there are a range of positions that might fall into that category. However, for many who promote these conspiracy theories, “hoax” may be hyperbole. Let's try to imagine one possible version of this: Suppose a few people at the top of organizations with ample financial resources had interests in generating concern about carbon emissions, and so created an incentive structure that subtly encouraged scientific thought leaders to support an overly pessimistic view of the climate hazard, which in turn led to a consensus that was not fully rational (perhaps by being based on biased, though not fraudulent, studies). How many people would be in a position to expose this situation? It is not even clear exactly how this situation could be convincingly revealed, even if it were true. And, although many climate change conspiracy theories are explicitly more radically conspiratorial than this, it is not clear that they really need to deviate very far from this to preserve their central thesis. For Grimes's argument to be genuinely informative – to do more than show that some versions of such theories overstate their case – it would have to be applicable to theories resembling this. But it is not at all clear that it is. Grimes's analysis assumes that if a conspiracy theory were true, it could easily be shown to be true by any random scientist who uncovered fraud. But that doesn't seem to be correct. Even if some element of actual fraud was involved and was discovered, that would probably not suffice to invalidate the consensus about climate change, nor would it show that the central thesis of the conspiracy theory was true. The deeper conspiracy would not have thereby been revealed. For many potential cases, Grimes's assumptions about what it takes to reveal a conspiracy do not hold.
Incidentally, largely independent conspiracies of different kinds could be loosely held together thematically by a confluence of interests or ideology – petty science fraud on one level, and more subtle influence on incentive structures, or propaganda campaigns, on another level. One level could not rat out the other if they did not actually conspire together. But is this a plausible model for how the world really works? Given that petty scientific fraud occurs, and that organizations do intentionally influence incentive structures (largely aiming at the good, one may assume), the question of whether the world is like this is really one of degree, and thus is a very difficult empirical question.
In general, contrary to Grimes's central assumption, the size of an institution in which a conspiracy is alleged to mainly occur does not even roughly indicate the size of the conspiracy itself, nor the circle of people with knowledge of it. In this regard, it may be useful to consider that institutions are institutions-within-institutions, and so one needs some substantial reason for focusing on a particular institutional scope. For example, NASA is an institution within the US government. Grimes's estimates for the size of a “Moon-landing Hoax” conspiracy would have been silly if he had determined its size based on the size of the entire federal government. Grimes's estimates are less silly, but still not appropriately nuanced.
5.4. Grimes's critical assumption is unsafe: leaks are not enough
Grimes's argument depends upon a dubious assumption. Namely, Grimes explicitly assumes that “a leak of information from any conspirator is sufficient to expose the conspiracy and render it redundant” (Grimes Reference Grimes2016: 3). Not only is this a mere assumption, it seems to be a false one. There is, in fact, no shortage of leaks, confessions, and quasi-confessions in controversial cases that have not succeeded in overturning official accounts. As mentioned above, E. Howard Hunt, David Morales, and several others have made self-incriminating statements regarding the JFK assassination, and Loyd Jowers admitted involvement in Martin Luther King's murder. In the latter case, a jury even found that this was true, and that government agencies were in on the conspiracy. But that has not led to the existence of a conspiracy being accepted by the establishment. Of course, there may be good reasons to doubt those confessions, or quasi-confessions. But to merely assume that the reason existing admissions in controversial cases were not accepted by the establishment is that they were false confessions seems to beg the central question. Recall that even Deep Throat's revelations to Woodward and Bernstein were not in themselves enough to bring down Nixon. If it hadn't been for the White House tape recordings, many of the crimes involved in Watergate may well never have become widely acknowledged.
Regarding NSA spying, one of Grimes's own examples, Snowden was not the first person to reveal the kind of thing that was going on. In 2006, AT&T technician Mark Klein alleged that AT&T was cooperating with the NSA to allow the latter access to vast amounts of data (Wolf Reference Wolf2007). This was reported, and Congress got involved, but ultimately nothing much happened. In addition, NSA employees Thomas Drake and Bill Binney revealed relevant information. Drake alerted Congress and resorted to leaking information to a newspaper reporter only after his efforts to use official channels failed to be effective (Welna Reference Welna2014). He was then prosecuted; one might even say “persecuted” (see Hertsgaard Reference Hertsgaard2016). Binney also faced heavy-handed efforts to keep him quiet (Welna Reference Welna2014). Despite some coverage of their situations, their revelations did not turn the massive NSA spying operation into an acknowledged reality. Other than conspiracy theorists, no one, including the mainstream media, paid much attention.Footnote 20 After all, Drake and Binney could be muzzled well enough, and otherwise written off as cranks. What made Snowden different is that he provided proof. But that is not at all an easy thing to do, as Snowden's harrowing saga dramatizes. This is, as mentioned above, a reason to think that extrapolating from this case would be of dubious validity.
Regarding the Tuskegee Experiment, which figures prominently in Grimes's analysis (and is discussed more below), even Peter Buxtun, who is credited with exposing the study in 1972, tried repeatedly to expose it before finally finding success. And further, according to Susan Reverby, a group of African-American professionals that included Public Health Service employee Bill Jenkins also tried to bring attention to the study but were unsuccessful: “The group wrote an editorial denouncing the Study's racism and ethics and sent it to the New York Times and the Washington Post, and then waited for something to happen. Nothing did” (Reverby Reference Reverby2009: 83).
Whether or not a purported conspiracy becomes acknowledged as a genuine conspiracy is not a simple matter of how much time passes before someone talks. It is rather a complex function involving a number of variables, including how carefully compartmentalized the conspiratorial activity is, the vulnerability of the whistleblowers to discreditation, the availability of evidence sufficient to prove the most damning claims (less than that and the story may well die), the plausibility of any semi-innocuous fallback position or “limited hangout,” and also the degree of desperation that exists to quash the story, which may depend on its toxicity and on what financial, political, or ideological interests may be at stake. Grimes's study does not in any way address any of these potentially confounding factors.
5.5. Grimes's problematic characterization of his study
One of the test cases Grimes considers is labeled “Vaccination conspiracy.” However, Grimes seems to conflate “conspiratorial beliefs about vaccination” with “ill-founded beliefs” and “dubious information” that is critical of vaccination (Grimes Reference Grimes2016: 3). This has the effect of suggesting that his analysis of the likelihood that conspiracies will fail has some relevance to critics of vaccination who make no conspiratorial accusations. Many people who raise concerns about vaccination, whether they are well-founded or not, emphasize the published science and argue on that basis for a more cautious approach to vaccination, or else point to specific flaws and limitations in studies supportive of vaccine safety. Contrary to what he suggests, Grimes's analysis is not actually relevant to such challenges to scientific orthodoxy.Footnote 21 A scientific consensus can be wrong without any conspiracy to maintain an incorrect view.
While Grimes's article does not specifically mention Andrew Wakefield,Footnote 22 his category of “Vaccination conspiracy” does include the idea that “there is a link between autism and the MMR vaccine,” which is an idea that is often traced back to Wakefield, who was the first author of a retracted Lancet article that hinted at a possible connection between autism and MMR vaccination. Wakefield has more recently argued that a conspiracy occurred involving CDC scientists, which covered up an association between autism and the timing of MMR vaccination in African-American boys.
A revealing radio “debate” was broadcast between Grimes and Wakefield, during which Grimes suggests that his article had established that the conspiracy Wakefield alleged would have involved 25,000–50,000 people. He says, “a CDC/WHO conspiracy would need at least fifty – twenty-five to fifty – thousand active collaborators. Now you have to insinuate that twenty-five to fifty thousand doctors, researchers, medics, people you meet, people who treat you, are actively conspiring. …”Footnote 23, Footnote 24 To this Wakefield responds, “It took five people to conspire to produce fraudulent data in this case.” That is quite a discrepancy. When challenged to justify his numbers, Grimes stated, “The number of people involved in the CDC/WHO was in a paper I did earlier on this year when I worked out how many people would have to be involved.” This clearly refers to Grimes (Reference Grimes2016). But just as clearly, contrary to Grimes's claim, his paper did nothing to establish the size of the conspiracy alleged. It merely assumed (rather unrealistically) that it must encompass the whole of both institutions.
To be fair, Grimes can most plausibly be read as merely suggesting that members of the scientific community would have uncovered and exposed the fraud, had it occurred, not that they would have been involved in the underlying conspiracy itself. Grimes explains that he “assume[s] all scientists involved would have [to] be aware of an active cover-up, and that a small group of odious actors would be unable to deceive the scientific community for long timescales” (Grimes Reference Grimes2016: 8). But it is at best unclear that this assumption would hold in a case like this. If five scientists produced a fraudulent study that purported to show that there was no connection between the MMR and autism – a result that nearly everyone in the scientific community both expected and desired – it is not obvious that this would have been easily uncovered by uninvolved scientists. The number of people in position to detect the deception might be quite small, and those people may not be particularly suspicious or eager to find fault.
Interestingly, the purported conspiracy that Wakefield and Grimes were arguing about involved an insider, CDC scientist William Thompson, who leaked information in a way that seems to fit Grimes's model. Thompson was even involved in the “conspiracy,” not just a member of the same institution who stumbled upon it. However, his revelations did not result in the “conspiracy theory” being accepted as true. This seems to further confirm that it is a mistake to assume that leaks decisively expose conspiracies. Revelations can simply be walked back, “contextualized,” and/or largely ignored by the mainstream media. When that happens, it is hard to know what to believe. In this case, in a series of recorded telephone conversations, Thompson made several quasi-confessions. He then gave a statement in which he said the collaborators on the study “intentionally withheld controversial findings” and met to get rid of documents.Footnote 25 However, another formal statement released by Thompson seems to moderate his assertions enough to allow an interpretation in which the appropriateness or inappropriateness of what had occurred may be viewed as a matter of reasonable scientific disagreement (see Thompson Reference Thompson2014). Regardless of the underlying truth in this case, the point here is that the genie can sometimes be put back in the bottle. Even if someone “talks,” we still don't know if they should be believed, or how they should be interpreted. These quite reasonable doubts can be seized upon to nullify, or disarm, the revelations. Whether or not that is really proper may be hard to adjudicate.
These difficulties are, perhaps, typical of what we can expect when there are revelations about most conspiracy theories. Grimes's examples of revealed conspiracies, in contrast, may be quite exceptional. Given that information about the Tuskegee Experiment had been published along the way, once light was shone on it, it was impossible to deny. But most controversial conspiracy theories cannot be assumed to be analogous in this respect. And Snowden's feats, by which he was able to provide dramatic evidence, are difficult, costly, and dangerous to replicate. As for the FBI scandal, let's not forget that Whitehurst first took the matter to his superiors, who did nothing. Then, after he took the matter to the Department of Justice, he was fired, and his credibility was attacked. It was ten years later that he was finally vindicated – and that was not a foregone conclusion. The Whistleblower Network News calls him “America's first successful FBI whistleblower” (WNN Staff 2017). Again, this is not exactly a case that can be assumed to be representative with respect to its ultimate success in becoming an acknowledged fact.
5.6. Grimes's moral error
Grimes's calculations are explicitly premised on the notion that the Tuskegee Experiment was not unethical from its inception (in 1932). Grimes writes, “The [Tuskegee] study became unethical in the mid 1940s, when penicillin was shown to effectively cure the ailment and yet was not given to the infected men” (Grimes Reference Grimes2016: 6). This suggests that, prior to penicillin, it was not unethical for doctors to deceive these African-American men, to conceal the nature of their disease, and to treat them as subjects rather than as patients, performing experiments on them without their informed consent, rather than providing the standard of care current at the time (ineffective and possibly harmful though it may have been). Let's not forget that not only were these sharecroppers not informed that they had a terrible and deadly transmissible disease, but they were actively blocked from finding out. And yet, based on the notion that this was somehow not unethical, Grimes considers the length of time during which the conspiracy was effectively concealed to be 25 years, rather than 40, twenty-five years being the “Time calculated from unethical experiment duration – 1947 to 1972” (Grimes Reference Grimes2016: 7).
In terms of invalidating Grimes's conclusions, this is not nearly the most significant consideration. But it provides a clear way of showing that his study is seriously flawed. While the study seems to be based on objective numbers, this critical number is skewed by an ethical judgement that is at least dubious. And yet there is no indication that the journal editor, peer reviewers of the article, Grimes's social science colleagues who cite the article, or the media outlets that promote its findings raised any objection to this. That is worth repeating: both scholars and journalists appear to have registered no serious objection to the claim that, for the first 15 years, the Tuskegee Study was not unethical.
Let's be clear about what Grimes is doing by introducing the issue of when the experiment became unethical. Although it is not possible to enter someone else's mind, it seems he is trying to prove a preconceived idea, namely, that belief in mature conspiracy theories is not rational.Footnote 26 His strategy is to use some examples of conspiracies that were revealed as a basis for determining how long it can be expected to take for a conspiracy to be revealed as a function of its size. And for his purpose, it would be helpful if the sampled conspiracies were shorter rather than longer. So, he came up with a reason for shortening the conspiracy associated with the Tuskegee Study – a practice that can be euphemistically called “correcting the data” – by implicitly defining the length of the conspiracy as the length of time the study was unethical, and then treating the first 15 years of the study as therefore not part of the period of the conspiracy. But whether or not the Tuskegee Study was ethical during those early years is not even the right question. The issue is how long a secret can be held by a fairly large number of people, not whether the secret can be considered (however implausibly) as ethical. Further, different people may have different ideas about what is ethical or right. And, by all appearances, many if not most of the people involved in the study continued to believe it was ethically justified even after penicillin was found to be an effective treatment.
5.7. The simple proof that Grimes's analysis is invalid
What is the relevance of suggesting that conspiracies tend to fail, or are unlikely to succeed? Surely it is this: to suggest that conspiracy theories are likely to be false, at least mature ones. But being likely to be false, by itself, doesn't imply that a theory should be dismissed. After all, most theories, of all kinds, including scientific theories, are likely to be false. This is where the mathematical analysis comes into play. If Grimes can show the degree of unlikeliness is sufficiently high, a dismissive attitude toward (certain) conspiracy theories might be warranted. That is what is at stake here. But has Grimes succeeded? By focusing on the Tuskegee Study (as, in the end, Grimes himself does), we can clearly see why the answer must be “no,” at least for those who accept that the Tuskegee Study was unethical and conspiratorial from the beginning.
As mentioned above, Grimes claims, “the results of this model suggest that large conspiracies (≥1000 agents) quickly become untenable and prone to failure” (Grimes Reference Grimes2016: 1). But the Tuskegee conspiracy lasted about 40 years (from 1932 to 1972). Grimes estimates the number of people in a position to blow the whistle on the study to be 6700. I've suggested that Grimes's method of estimation is invalid, even a bit silly. Let's nevertheless accept it for the sake of argument. If we do, no valid analysis, mathematical or otherwise, could reasonably support the conclusion that a conspiracy theory involving several thousand conspirators (or fewer), and which has been around for four decades (or less), can be safely dismissed. Once we have a precedent, conspiracies of similar scope must be acknowledged to have the potential to last a similar length of time before exposure. No amount of fancy math can change that, no matter how inventive or obscure.
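The point can be checked with a back-of-the-envelope computation. The following is a minimal sketch using a toy exponential-leak model of my own (not Grimes's equations), in which the per-person annual leak hazard is back-inferred from the Tuskegee precedent itself, taking Grimes's headcount of 6700 at face value:

```python
import math

# Toy model: each of N conspirators leaks independently at a constant hazard
# lam per year, so P(still secret after t years) = exp(-lam * N * t).
# Numbers are illustrative; the hazard is inferred from the precedent,
# not taken from Grimes's paper.
N, T_EXPOSED = 6700, 40      # Grimes's headcount; actual 1932-1972 duration
LAM = 1.0 / (N * T_EXPOSED)  # hazard making the expected exposure time 40 years

def p_intact(n: int, years: float, lam: float = LAM) -> float:
    """Probability the conspiracy remains unexposed after `years`."""
    return math.exp(-lam * n * years)

# Under a hazard consistent with the precedent, a same-sized conspiracy has
# a substantial chance of remaining unexposed for decades:
print(round(p_intact(6700, 25), 2))  # 0.54
print(round(p_intact(6700, 40), 2))  # 0.37
```

On these admittedly crude assumptions, a 6700-person conspiracy has roughly even odds of surviving a quarter century undetected, which is hardly grounds for confident dismissal.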
There is one caveat to the idea that a clear precedent can be so powerfully revealing. If something significant has changed since the precedent-setting case, it may no longer have the same implications. However, in this case, the main thing that has changed is that potential conspirators may have learned that they must be more careful, and thus hold their secrets more closely – perhaps by not publishing what they were doing in the scientific literature the whole time! So, although there may have been many people in some way involved in or aware of the Tuskegee Experiment, if something similar were attempted today it may well be designed to be handled more discreetly. At least, that is a possibility that Grimes has done nothing to refute.
Grimes's argument thus fails. This is proven straightforwardly with Grimes's own example and his own assumptions, correcting only his moral error. Indeed, his corrected analysis, if his assumptions had any merit, would give us reasons to believe that large conspiracies actually could succeed for long periods.
6. Conclusion
The idea that conspiracy theories are unlikely because conspiracies “tend to fail” is commonly asserted in academic work. But the arguments that have been given for it are not convincing. Douglas et al. (Reference Douglas, Uscinski, Sutton, Cichocka, Nefes, Chee and Deravi2019), for example, which is authored by influential figures in the relevant subfield, cited four sources seemingly to support that idea. Yet only one of those sources provided (seemingly) significant support for that claim. And that one, Grimes (Reference Grimes2016), did not stand up to scrutiny. More specifically, Grimes purports to offer a mathematical demonstration showing that theories that have been around for a long time that posit large conspiracies are highly likely to have been exposed, if they had been true. But his argument has crippling flaws. It is based on cases that are unlikely to be representative. It depends on a method of approximating the size of conspiracies that is highly unreliable, potentially off by orders of magnitude. It wrongly assumes that leaks by conspirators necessarily lead to the failure of the conspiracy in the sense of mainstream acknowledgment that it was a conspiracy. It ignores the fact that we have no good way of estimating the number of real conspiracies that have never been revealed. And it fudges the length of the Tuskegee Experiment, which is the conspiracy upon which Grimes's findings ultimately depend. Further, even if Grimes's argument had been valid, it would not show that conspiracies of a more modest size “tend to fail,” though Douglas et al.'s claim was unqualified.
Building on the arguments of Coady (Reference Coady2012, Reference Coady and Dentith2018), I maintain that the evidence that conspiracies tend to fail remains weak. Indeed, not only is it possible that many conspiracies have remained concealed from us; we also have reasons to think that many conspiracies, such as covert operations, are actually successfully kept from us, even though we don't know the details. And we know that some conspiracies remained secret enough for a fairly long time. We also have reasons to think that even a secret that is leaked may not result in a failed conspiracy, as it may be difficult to determine if the leak is actually true.
In my own view, the fact that a larger conspiracy has more potential leakers than a smaller conspiracy does indeed give us a reason to think that the larger one would be more prone to being exposed and thereby widely accepted as true, all else being equal. But, generally speaking, not all else is equal, and size is not all that matters. The particular structure of the conspiracy also matters – How well compartmentalized is it? The institutional structures involved matter – How easily can people be made to contribute to the conspiracy without even knowing it? The individuals involved matter – How “professional” are they with regard to their conspiratorial activities? And circumstantial considerations matter – How plausibly deniable is the conspiracy if a leak occurs? These are just some of the other considerations, besides size, that influence a conspiracy's risk of failure. Such considerations must be assessed on a case-by-case basis, along with the relevant evidence, in order to judge the plausibility of any particular conspiracy theory, even a mature one that posits a large conspiracy.Footnote 27