
What’s Going On? Disinformation, Understanding, and Ignorance

Published online by Cambridge University Press:  19 June 2025

André J. Abath*
Affiliation:
The Federal University of Minas Gerais – UFMG, Belo Horizonte, Brazil

Abstract

Disinformation is a growing epistemic threat, yet its connection to understanding remains underexplored. In this paper, I argue that understanding – specifically, understanding how things work and why they work the way they do – can, all else being equal, shield individuals from disinformation campaigns. Conversely, a lack of such understanding makes one particularly vulnerable. Drawing on Simion’s (2024) characterization of disinformation as content that has a disposition to generate or increase ignorance, I propose that disinformation frequently exploits a preexisting lack of understanding. I consider an important objection – that since understanding is typically difficult to acquire, we might rely on deferring to experts. However, I argue that in epistemically polluted environments, where expertise is systematically mimicked, deference alone provides no reliable safeguard. I conclude by briefly reflecting on strategies for addressing these challenges, emphasizing both the need for promoting understanding and for cleaning up the epistemic environment.

Type: Article
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

1. Introduction

In February 2025, during the early days of Trump’s second term in office and amidst severe budget cuts to the United States Agency for International Development (USAID), falsehoods about the agency spread widely online. One viral video on X and other platforms, for instance, falsely claimed that Hollywood celebrities such as Angelina Jolie, Sean Penn, and Ben Stiller were each paid millions to visit Ukraine in an effort to boost President Volodymyr Zelensky’s popularity. Without providing any evidence, Elon Musk joined the attacks, posting on X: “USAID is a criminal organization. Time for it to die.”Footnote 1 In response, journalist Mike Rothschild, quoted in The New York Times, criticized Musk’s remarks stating, “He’s exploiting ignorance about the way government works and the lack of oversight over anything he’s doing” (Sanger and Wong 2025; my italics).

It is hardly surprising that the spreading of disinformation and ignorance are closely linked. In a book on the history of ignorance, Peter Burke (2023) writes that individuals and groups with certain kinds of knowledge who attempt to keep this knowledge from other groups “allow, maintain, encourage, exploit, or even require ignorance on the part of their targets” (2023: 197) and that “disinformation both maintains ignorance and depends on ignorance for its success” (2023: 221). In a recent paper, Simion (2024) in fact provides an account of the nature of disinformation in terms of ignorance generating content. More specifically, according to her, “X is disinformation in a context C iff X is a content unit communicated at C that has a disposition to generate or increase ignorance at C in normal conditions” (2024: 1213).Footnote 2

In this paper, I explore the relationship between disinformation and a particular form of ignorance – one noted by Mike Rothschild in the quote above – namely, a lack of understanding about how things work. Here I frame this idea in terms of a lack of understanding of how things work and why they work the way they do. My central claims are the following: this sort of understanding can, all else being equal, shield one from disinformation campaigns. Conversely, a lack of such understanding leaves one particularly vulnerable. Failing to understand how government agencies function, for instance, leaves one particularly vulnerable to disinformation targeting institutions like USAID. Similarly, lacking an understanding of how COVID-19 vaccines work leaves one particularly vulnerable to disinformation regarding their safety.

This is how I will proceed. In Section 2, I sketch, in broad outline, an account of understanding. In Section 3, I argue that, all else being equal, understanding how things work and why they work the way they do can shield one from disinformation campaigns. Section 4 explores the converse: that a lack of such understanding leaves one especially vulnerable to disinformation. In Section 5, drawing upon Simion’s (2024) conception of disinformation as content disposed to generate or increase ignorance, I argue that we should supplement this account with the insight that disinformation frequently exploits a preexisting lack of understanding about how things work and why they work the way they do. Section 6 addresses an important objection and considers potential remedies to the threat posed by disinformation campaigns. Given that understanding is hard to acquire, we typically rely on deferring to experts who possess it, or at least possess it to a greater degree than ourselves. One might argue that this reliance should suffice to shield us. I contend, however, that in epistemically polluted environments – where disinformation campaigns deliberately mimic the markers of genuine expertise – mere deference is no reliable safeguard. I conclude with a brief reflection on possible strategies for addressing this challenge, emphasizing both the need to promote understanding and to clean up the epistemic environment.

2. Understanding: a preliminary account

What is understanding, and how does it relate to disinformation? Let us begin with the first question. The philosophical literature recognizes different forms of understanding (Kvanvig 2003; Elgin 2007; Gordon 2012; Hills 2016; Baumberger et al. 2016). A common distinction is between propositional understanding, which takes a proposition as its object – as in “Lucas understands that it is time to go” – and objectual understanding, which, in Kvanvig’s words, occurs “when understanding grammatically is followed by an object/subject matter, as in understanding the presidency, or the president, or politics” (2003: 191). Recently, much attention has been given to explanatory understanding – or understanding-why – which is directed at a phenomenon or a question, as in “Alice understands why the space shuttle Challenger exploded” (Grimm 2008; Khalifa 2012; Strevens 2013; Hills 2016). For our purposes, however, it is not necessary to engage with the broader debate about different types of understanding or how they interrelate (for an overview of the debate, see Hannon 2021). In this paper, I would like to focus on understanding how things work and why they work the way they do. This may encompass elements of objectual understanding, explanatory understanding, or an overlap between the two.

How can we sketch an approach to understanding in broad strokes? While there is widespread disagreement in the literature on how to develop a detailed account of understanding, philosophers of different persuasions generally agree on one point: understanding is not merely a matter of knowing a list of facts. Rather, it involves a kind of grasping. More specifically, it involves grasping the connection between various pieces of information. Kvanvig (2018: 699) puts the point in the following way: “Central to the notion of understanding are various coherence-like elements: to have understanding is to grasp explanatory and conceptual connections between various pieces of information involved in the subject matter in question”.

Elgin (2007: 35) makes a similar point:

Understanding is primarily a cognitive relation to a fairly comprehensive, coherent body of information. The understanding encapsulated in individual propositions derives from an understanding of larger bodies of information that include those propositions. I understand that Athens defeated Persia in the battle of Marathon, because I grasp how the proposition stating that fact fits into, contributes to, and is justified by reference to a more comprehensive understanding that embeds it.

Grimm (2016: 214) helpfully notes that what is grasped in understanding is a structure, such that in understanding, “one apprehends how the various elements of a system depend upon one another, so that one can potentially manipulate the system in various ways”.

Does every piece of information that constitutes the object of understanding need to be true? In other words, is understanding factive in the same way that propositional knowledge is? Elgin (2007, 2017), for example, argues in favor of a non-factive account, according to which understanding can incorporate felicitous falsehoods. According to her, this is evident in scientific practice, where idealizations and models frequently play a central role. Many of our best scientific theories rely on deliberate falsehoods. For instance, the behavior of real gases is often explained by appeal to the ideal gas, a theoretical construct that assumes properties that are hard to discern in actual gases (Elgin 2007: 40).

However, as Hannon (2021) points out, we can acknowledge the role of idealizations in fostering understanding without committing to the stronger claim that deliberate falsehoods constitute the object of understanding. As he puts it, “a false theory may foster genuine understanding, but the phenomenon that is understood (or the predictions one is attempting to make) must still be true – or at least approximately true” (2021: 278). If this is correct, we can reconcile Elgin’s insights about the function of idealizations in scientific practice with the factivity of understanding.

This is not the place to settle this debate. For present purposes, I will assume that understanding is factive – that is, that its objects are true propositions. However, not much hinges on this assumption. My central argument can be easily reframed in non-factive terms.Footnote 3

What about the relation between understanding and knowledge? A long-standing debate in the literature concerns whether understanding can be reduced to knowledge-why or knowledge of causes – a view tracing back to Aristotle and widely endorsed in the philosophy of science (see, e.g., Lipton 2009; Khalifa 2017). However, a number of epistemologists argue that understanding is not in fact reducible to knowledge (see, e.g., Kvanvig 2003; Elgin 2007, 2017; Pritchard 2009; Hills 2016). Hills (2016: 662), for example, argues that understanding – her focus is on understanding-why – “is different from this standard conception of propositional knowledge: it is consistent with luck, it can be based on defeated evidence and it cannot (easily) be transmitted by testimony”. In this paper, I remain neutral on this issue. My argument does not hinge on whether understanding reduces to a specific type of knowledge, such as knowledge-why. What it does require is the recognition that understanding is not merely the knowledge of a collection of discrete factual propositions, devoid of a grasp of the relationships between them. If some form of knowledge necessarily involves grasping such relationships, then it may well be compatible with the view put forward here that understanding reduces to that form of knowledge.

We are now in a position to introduce a rough account of understanding – one that can be applied across different forms of understanding:

Understanding

Understanding is a relation between a subject S and a structured body of (true) information, such that S grasps the relationships among its elements in a way that enables explanation, inference, and application across various contexts.

Let me clarify this thesis by applying it to a case of understanding how something works and why it works as it does. Consider Leo, an experienced Formula 1 mechanic. Leo grasps how various components of Formula 1 cars – engine, aerodynamics, suspension, electronics, tires – interact to produce their performance. This enables him to explain how performance is generated and predict how a car will behave under different conditions. He can also explain how and why certain mechanical systems work the way they do: why minor changes in tire pressure affect grip and handling; how aerodynamic tweaks improve cornering speed; why a particular engine misfire is likely caused by an issue in the fuel injection system rather than the ignition system.

Next imagine someone who understands how government agencies such as USAID work and why they work the way they do. More specifically, consider Roberta, a policy analyst specializing in international development. Roberta grasps the institutional structure of USAID – its funding mechanisms, partnerships, bureaucratic processes and relationships with other government agencies, NGOs, and private contractors. She understands how aid is allocated, why certain regions receive more funding than others and how shifting political priorities influence strategic decisions. This allows her to explain and predict a number of aspects of USAID’s operations, such as: why humanitarian aid is often prioritized over long-term development projects in times of crisis; how congressional budget cuts affect staffing and program implementation in different regions; how diplomatic and geopolitical considerations shape development strategies, such as why aid programs in certain countries are expanded while others are scaled back.

These examples illustrate what we can expect from someone who understands how something works and why it works the way it does – whether a complex piece of technology like an F1 car or an institution. Such an individual grasps the relationships between various components in a way that enables them to explain the system’s behavior, anticipate outcomes resulting from changes in its internal workings or external conditions, identify underlying causes of problems, and so on.

Now that we have outlined a preliminary account of understanding, we are in a position to examine its relationship to the phenomenon of disinformation. This is the focus of the next section.

3. Understanding as an epistemic shield

In this section, I will argue that, all else being equal, possessing understanding of how things work and why they work the way they do can act as a safeguard against falling prey to disinformation campaigns.

Let us begin by revisiting Leo. Now, imagine that Leo and other F1 mechanics are the targets of a disinformation campaign aimed at convincing them that a particular car crash was caused by human error rather than a mechanical failure, as they currently believe. More specifically, suppose that Leo and his colleagues are exposed to the following content aimed at reshaping their views on how and why the crash occurred:

Car Crash

A series of articles and social media posts cite well-known former drivers and race strategists arguing that “driver misjudgment, not technical malfunction, was the primary cause of the crash”. Leo and other mechanics are presented with radio excerpts that suggest uncertainty or hesitation in communication before the crash. Moreover, they are presented with a leak, in which team engineers claim that the driver’s style is too aggressive and might put him in dangerous situations. Leo also starts to receive messages questioning his professional judgment. Some insiders hint that “maybe you’re too close to this” or “perhaps you just want it to be a mechanical failure because that’s what fits your expertise”. Meanwhile, the team’s executives, eager to avoid liability for mechanical failure, begin subtly encouraging mechanics to reconsider their assessment. They emphasize the risks of making claims that could damage the team’s reputation or expose them to regulatory scrutiny.

Suppose that the crash was indeed the result of mechanical failure, so that the content above is one that has a disposition to generate or increase ignorance in normal conditions regarding the causes of the crash, fitting into Simion’s (2024) account of disinformation. As a result of his understanding of how F1 cars work and why they work the way they do, plus the available evidence, Leo is convinced that the crash was the result of mechanical failure alone, with no fault to be attributed to the driver. Is Leo likely to fall prey to the disinformation campaign that targets him and others?

I take it that the answer here is clearly “no”. Leo’s understanding of F1 mechanics should allow him to resist the disinformation campaign in several ways. First, it should equip him with the means to assess competing explanations of the crash. Since he grasps how the various components of an F1 car interact – how tire degradation affects grip, how aerodynamic balance conditions handling, and so on – he can judge whether an account of the crash coheres with his body of information on the matter. Moreover, Leo’s understanding should enable him to recognize causal patterns. Mechanical failures follow characteristic trajectories; so do driver errors. If the evidence points to a sudden loss of braking power rather than a delayed reaction time, for example, Leo is in a position to tell that the failure is mechanical and not human. Since he knows how these systems work, he knows what kind of failure to expect when they malfunction. Also, Leo’s epistemic position is strengthened by his ability to assess the evidential weight of different sources. A core epistemic role of understanding is that it should allow one to differentiate between reliable and unreliable sources. The disinformation campaign relies on selectively framed testimonies and rhetorical insinuations – radio excerpts suggesting hesitation, leaks from engineers criticizing the driver’s style. However, Leo knows that such testimonial evidence is epistemically weaker than direct mechanical diagnostics, for instance. Understanding should enable him to give proper epistemic weight to different sources and to recognize that the strongest available evidence supports the mechanical failure hypothesis.

So, given his understanding of how F1 cars work and why they work the way they do, Leo is well-positioned to resist the disinformation campaign unfolding in Car Crash. Yet external pressures – such as institutional threats or potential professional fallout – might compel him to publicly endorse the human error explanation, if only to preserve his standing within the investigation. In more acute cases, these pressures might even lead Leo to want to believe the alternative explanation. More troublingly, Leo might display a disposition to shy away from confrontation or risk at the very moment when he ought to stand firm, thereby undermining his own conviction in the mechanical failure hypothesis. Alternatively, Leo might fall prey to conformity, seeking to adjust his beliefs to align with those of influential colleagues or authoritative voices. Over time, this pull may erode his confidence in the mechanical failure hypothesis, through the slow accumulation of doubt, seeded by those around him.

This illustrates a crucial point: possessing understanding does not render one immune to disinformation. Epistemic vulnerability can stem from more than lacking understanding; it can arise from reduced self-confidence, intellectual cowardice, a disposition toward conformity, and related factors.Footnote 4 Nevertheless, this should not be taken as a counterexample to my central claim that, all else being equal, possessing understanding of how things work and why they work the way they do provides an epistemic shield against disinformation campaigns. When the factors mentioned above enter the picture, conditions are no longer equal. My claim is not that understanding invariably grants immunity against disinformation, but rather that it constitutes a crucial epistemic resource – a resource that, provided it remains uncompromised by external pressures or internal vulnerabilities, for example, significantly enhances one’s epistemic resilience.

Similarly to Leo, Roberta, our policy analyst specializing in international development, who understands how US government agencies such as USAID work and why they work the way they do, should be able to see through the fog in the case of a disinformation campaign against such agencies. While disinformation may be widespread – alleging corruption, inefficiency, or covert operations – Roberta’s grasp of USAID’s funding mechanisms, policy priorities, and bureaucratic constraints equips her to critically evaluate misleading narratives. Of course, external pressures, such as political influence or media saturation, may complicate her epistemic position. But all else being equal, her understanding can function as an epistemic safeguard against such campaigns.

At this point, one might object that both Leo and Roberta are specialists, and it is hardly surprising that they should be somewhat insulated from disinformation campaigns within their respective domains. But what about non-experts? Are they entirely defenseless against such campaigns? Not necessarily. Understanding how things work and why they work the way they do comes in degrees.

Consider an F1 enthusiast, someone obsessed with car mechanics but lacking direct experience with F1 vehicles. They may not be able to diagnose a malfunction by listening to an engine’s sound alone, but they might still be better equipped than most to assess competing explanations regarding the causes of a car crash. Their understanding, though partial, provides some resistance to misleading content. The same holds for someone with a passing but structured grasp of how international aid organizations function. Even if they lack Roberta’s expertise, they may still recognize certain patterns, inconsistencies or implausibilities in a disinformation campaign. Understanding, even when partial, is often enough to place constraints on what one is willing to accept uncritically.

I would like to suggest, however, that there are cases in which a subject falls below a certain threshold necessary to be considered as having some understanding of how something works and why it works the way it does. In such cases, it is appropriate to regard the subject as lacking understanding. These, I will argue next, are precisely the cases in which individuals are particularly vulnerable to disinformation campaigns.

4. Epistemic vulnerability: when understanding is lacking

I have characterized understanding as a relation between a subject S and a structured body of (true) information, such that S grasps the relationships among its elements in a way that enables explanation, inference, and application across various contexts. This broad account accommodates the possibility of partial understanding – cases in which a subject has some grasp of the relationships between elements within a body of information, allowing for a limited degree of explanatory capacity, inference-making, and cross-contextual application. But take away that grasp, and what’s left? If a subject fails to discern these relationships, if they cannot generate explanations, draw meaningful inferences, or apply information beyond the immediate case at hand, then they do not merely have less understanding; they lack it altogether.

In order to illustrate what a lack of understanding may amount to, consider the following case (I will call it Mercury’s Oddities), introduced by Wilkenfeld et al. (2016) in an experimental investigation on when and why we attribute understanding to others.Footnote 5

Mercury’s Oddities

Richard has been taught a great deal about astronomy and astrophysics. For example, if asked, he will tell you a lot of true things about the orbit of Mercury. He can accurately report Mercury’s mass, volume, and average distance from the sun, and that Mercury’s apparent orbit has some oddities. Richard can also tell you a lot of things about the theory of general relativity. General relativity explains some oddities in Mercury’s observed orbit, but Richard was never told and never makes that connection, so he could not say anything about what explains those oddities. Everything Richard says about Mercury and general relativity is true, though. Nonetheless, he does not see any connections between the things he says about Mercury and about general relativity (Wilkenfeld et al. 2016: 380).

Participants in Wilkenfeld et al. (2016) were hesitant to attribute understanding to Richard in this scenario – what the authors refer to as the ignorant condition – and for good reason. We are told that Richard knows plenty of facts regarding Mercury, its orbit, and general relativity. But something crucial is missing. He fails to see how these facts connect, how they come together to explain Mercury’s peculiar orbital behavior. The structure is there, the pieces are in place, but the grasp is absent. Because he cannot integrate this information into a coherent explanatory framework, he is unable to generate explanations, derive meaningful inferences, or apply what he knows to new contexts. Thus, it seems that Richard does not understand why Mercury has some oddities in its observed orbit, or more generally, why Mercury’s orbit behaves the way it does.

I would now like to suggest that, despite knowing numerous facts about Mercury’s orbit and general relativity, Richard’s lack of understanding – his failure to grasp the connection between these facts and why Mercury’s orbit behaves as it does – may leave him vulnerable to disinformation campaigns. Consider the following case, which I will call Dark Twin:

Dark Twin

Richard comes across a series of online articles and videos questioning the mainstream explanation of Mercury’s orbital behavior. A popular alternative science website claims that the peculiarities in Mercury’s motion are not due to relativistic effects but rather to an undisclosed celestial body – a “dark twin” of Mercury hidden from conventional detection. The articles are well-crafted, laced with technical jargon, and cite real astronomical anomalies, giving them a veneer of credibility. The website presents interviews with supposed experts, arguing that physicists have misled the public about general relativity for decades and that the “hidden twin” hypothesis is being suppressed by the scientific establishment. Lacking an understanding of why general relativity explains Mercury’s orbit, Richard is ill-equipped to critically evaluate these claims. The disinformation campaign exploits his epistemic gap: because he knows many isolated facts but has never grasped the explanatory structure that unites them, he cannot easily see why the mainstream account is correct. Instead, he finds the alternative explanation intriguing. He reasons: “I’ve always known Mercury’s orbit is strange. Maybe relativity isn’t the full story. And why do physicists just expect us to accept their word on it?” Without a coherent grasp of the scientific explanation, the disinformation gains traction in his mind. Over time, Richard becomes increasingly skeptical of general relativity’s role in planetary dynamics. He starts frequenting forums that challenge mainstream astrophysics and dismiss experts as biased or part of an academic echo chamber. His initial curiosity hardens into conviction: the official explanation is flawed, and something is being hidden from the public. Disinformation has done its work.

Dark Twin seems plausible. Richard, despite possessing extensive factual knowledge, is unable to integrate those facts in a way that enables explanation, inference, and application across various contexts when it comes to the question of why Mercury has some oddities in its observed orbit, or why Mercury’s orbit behaves the way it does. The crucial point is this: had Richard understood why general relativity explains Mercury’s motion, the dark twin hypothesis would very likely have struck him as implausible from the start. He would have recognized that general relativity predicts the precise deviations in Mercury’s orbit and that there is no leftover anomaly crying out for a new celestial body. But because his knowledge is compartmentalized and devoid of a grasp of the relationships among its elements, he is left epistemically vulnerable – ripe for manipulation by a well-packaged but misleading alternative.

If this is so, we can conclude that factual knowledge alone, without understanding, may not offer enough protection against disinformation. A subject may possess an extensive repertoire of factual knowledge while lacking the structured grasp that enables explanation, inference, and cross-contextual application. Without this integrative grasp, even a well-informed individual may be unable to distinguish between a well-supported theory and a misleading but superficially plausible alternative. Thus, it is not mere factual knowledge but understanding that serves as an epistemic safeguard against disinformation campaigns.

5. Disinformation and the exploitation of ignorance

As noted earlier, Simion (2024) offers an account of disinformation in terms of its disposition to generate or increase ignorance. More specifically, she defines disinformation as follows: “X is disinformation in a context C iff X is a content unit communicated at C that has a disposition to generate or increase ignorance at C in normal conditions” (2024: 1213).

Simion (2024: 1214) identifies different ways in which disinformation can generate or increase ignorance:

  • By presenting content that induces false beliefs.

  • By providing content that undermines knowledge by defeating justification.

  • By inducing epistemic anxiety.

  • By reducing doxastic confidence or defeating justification.

  • By conveying false implicatures.

What I want to suggest is that these means of ignorance generation or amplification often exploit a fundamental epistemic deficit: a prior lack of understanding of how things work and why they work the way they do. In many cases, disinformation’s effectiveness depends on an epistemic landscape already shaped by gaps in understanding. When individuals lack a grasp of the relationships between relevant facts, they are more susceptible to misinformation that distorts those connections, undermines justifications, or exploits epistemic insecurities.

The case of Richard illustrates this point. Richard possesses an extensive collection of factual knowledge about Mercury’s orbit and general relativity, yet his failure to grasp how these facts interconnect leaves him vulnerable to the dark twin hypothesis. Because he does not understand why general relativity explains Mercury’s orbital anomalies, he does not immediately recognize the hypothesis as implausible. The disinformation campaign exploits this epistemic gap – Richard is prone to being misled by false claims precisely because of his lack of understanding.

Of course, other forms of ignorance may also facilitate the spread of disinformation. A straightforward case is a sheer lack of factual knowledge. If one has never encountered accurate information about a given topic, one may be at the mercy of whatever claims happen to reach one first. Someone who has never heard of USAID before, for example, may be more likely to accept without question a claim that it is a covert intelligence operation rather than a development agency. Similarly, an individual unfamiliar with the principles of epidemiology may be easily swayed by misleading claims about vaccines.[6] Moreover, ignorance about the mechanisms of disinformation itself can further entrench susceptibility. If one is unaware of common tactics – such as the deliberate manufacture of controversy, selective presentation of evidence, or emotionally charged framing – one is less likely to recognize when one is being manipulated.

My aim here is not to provide an exhaustive taxonomy of the various forms of ignorance that disinformation exploits. My claim is a more modest one: the means by which disinformation generates or amplifies ignorance often rely on a pre-existing lack of understanding – specifically, a failure to grasp how things work and why they work the way they do. This does not imply that other forms of ignorance – such as a lack of mere factual knowledge, or unfamiliarity with disinformation tactics – do not also play a crucial role.

6. The challenge of acquiring understanding and the limits of deference

Understanding is typically difficult to attain[7] – certainly more so than mere factual knowledge. I can come to know that there is a cat on the table simply by opening my eyes, but I cannot acquire an understanding of how F1 cars work and why they work as they do in anything like the same effortless manner. Understanding requires more than simple observation; it demands the integration of information and a grasp of the underlying relationships that structure a given domain. As Hazlett puts it, according to many epistemologists,[8] "understanding requires a suite of intellectual abilities, including the ability to give and follow explanations or to provide theoretical accounts, the ability to draw conclusions about similar cases, and the ability to answer explanatory and modal questions" (2025: 609). But developing these abilities is far from straightforward. It takes patience, practice, and often expert guidance, and it can be impeded by cognitive biases, limited attention, unreliable information sources, or simply by the many other demands competing for our time.

Thus, we may appear to be in a bind. Understanding can shield us from disinformation campaigns, and so fostering it is surely desirable. Yet if understanding is characteristically so challenging to acquire, relying on widespread improvements in public understanding as our main defense against disinformation may be overly optimistic.

At this point, however, one might object that the problem is not as deep as it initially appears. After all, we can defer to those who understand how something works and why it works the way it does – or at least to someone who understands it better than we do. If understanding is difficult to attain firsthand, perhaps our best strategy is to rely on those who already possess it.

Consider Richard once again. Despite possessing a wealth of factual knowledge about Mercury's orbit and general relativity, he lacks the understanding necessary to integrate these facts into a coherent framework. This epistemic gap leaves him vulnerable to the dark twin hypothesis. But suppose Richard, recognizing his own limitations, decides to defer to an expert in celestial mechanics – someone who not only knows the relevant facts but also grasps how they fit together to explain Mercury's orbital behavior. By deferring to this expert, Richard could avoid being misled by the disinformation campaign, even without personally attaining the relevant understanding.

This suggests a possible solution: when we lack understanding, we can mitigate our epistemic vulnerability by deferring to those who possess it. If understanding shields one from disinformation, then relying on those who understand – scientists, engineers, historians, and policy analysts – should provide at least some degree of protection. In principle, Richard need not understand why general relativity explains Mercury's orbit to reject the dark twin hypothesis; it should suffice for him to trust those who do.

Of course, for Richard to successfully shield himself from disinformation campaigns through deference to experts, he must first reliably identify who the genuine experts are. Without this ability, his deference risks being misdirected – potentially reinforcing rather than dispelling his epistemic vulnerability. But how should he go about doing so? The problem of recognizing expertise has received some attention in the epistemological literature (see, e.g., Goldman 2001; Anderson 2011; Guerrero 2007; McKenna 2023). Several criteria have been proposed as indicators of genuine expertise, including formal credentials and institutional backing, a proven track record of accuracy, argumentative rigor, and alignment with the broader expert consensus.

However, as Levy (2022) observes, we live in epistemically polluted environments where "the cues for expertise don't correlate well with its actual possession" (2022: 112). A central feature of such environments is the proliferation of pseudo-expertise – where individuals and institutions successfully mimic the outward signs of genuine expertise, whatever they happen to be. This epistemic mimicry makes it increasingly difficult for laypersons to distinguish between those who in fact understand how things work and why they work the way they do, and those who merely play the role of an expert without the requisite understanding.

A striking example of epistemic mimicry can be found in Merchants of Doubt (Oreskes and Conway 2010), which meticulously documents how individuals without genuine expertise successfully played the role of scientific authorities to mislead the public on issues such as tobacco, climate change, and acid rain. They did not need to possess genuine understanding of the phenomena in question; they only needed to appear authoritative enough to sow confusion.

In epistemically polluted environments, as Levy (2022) warns, the usual indicators of expertise – academic credentials, institutional affiliations, or media presence – may be strategically co-opted by non-experts to project credibility. The result is an erosion of the reliability of traditional epistemic cues, making it increasingly difficult for laypersons to separate genuine expertise from its mere performance. When disinformation campaigns exploit this mimicry, they do not merely introduce falsehoods; they create an epistemic fog in which deference to expertise becomes unreliable.

This complicates Richard’s situation. If he were to attempt to shield himself from disinformation through deference to expertise, he would first need to navigate an epistemic landscape rife with counterfeit authorities. Lacking understanding himself, he is especially vulnerable to mistaking the merchants of doubt for genuine experts. The very strategy that could, in principle, protect him – trusting those who understand – may in practice lead him deeper into epistemic entanglement.

The broader lesson is clear: in an environment where the cues of expertise can be co-opted, mere deference is not enough. The capacity to identify genuine understanding – and to recognize when expertise is merely being mimicked – is itself an epistemic skill. Of course, possessing understanding of how something works and why it works the way it does may by itself allow one to identify experts who have an even deeper grasp of it. If I understand how government works, I am in a better position to recognize a true specialist on government policy, say. But here we confront the same problem we started with: understanding is typically difficult to attain, and so most of us must rely on deference. Yet if deference alone is unreliable in polluted epistemic environments, where does that leave us?

The task may seem clear: we must clean up the epistemic environment. But doing so is no simple matter. As Levy (2022) puts it, we face the problem that "while everyone might be better off if no one pollutes, no individual can make a significant difference on their own, and any individual who pays the cost of clean-up locally is worse off than others who don't cooperate" (2022: 127). Cleaning up the epistemic environment, then, requires large-scale cooperation. And large-scale cooperation, in turn, "may require some degree of coercion, from government or other institutions with the clout to impose costs on those who don't cooperate" (Levy 2022: 127). What policies or regulations might help address this issue is, of course, beyond the scope of this paper.[9]

Are we, then, in a hopeless predicament – fated to remain easy prey for disinformation – if understanding is difficult to attain and cleaning up our epistemic environment equally challenging? Perhaps not. From a more optimistic vantage point, we might recall that neither task is utterly beyond our grasp. Cleaning up the epistemic environment may yet prove achievable, provided we identify and implement effective policies and regulations. Likewise, even if robust understanding is demanding, understanding comes in degrees. Some grasp of how things work and why they work the way they do – even if partial or incomplete – is surely preferable to none at all.

Moreover, even if understanding cannot simply be passed on through testimony from those who possess it,[10] testimony can nevertheless help promote it. Gordon (2016), for instance, argues that speakers can facilitate understanding in their audience by encouraging "(i) the acquisition of new true beliefs, (ii) the rejection of false beliefs, (iii) the grasping of new connections (and rejection of mistaken connections), (iv) overcoming blocks to grasping, and (v) the acquisition or enhancement of abilities linked to grasping" (2016: 294). Thus, even if testimony alone does not transmit understanding, it can still pave the way for its acquisition.

7. Conclusion

In this paper, I have examined the relationship between disinformation and understanding – specifically, understanding how things work and why they work the way they do. I have argued that such understanding, all else being equal, can shield one from disinformation campaigns. Conversely, I have argued that a lack of such understanding leaves one particularly vulnerable. If my argument is on the right track, fostering public understanding may seem an obvious priority. But understanding is not easily attained. So what should we do?

One answer is to defer to experts – those who understand things better than we do. But in epistemically polluted environments, where disinformation is widespread, deference alone is not enough. The usual markers of expertise can be co-opted, and the cues we rely on to identify genuine understanding may be unreliable. This suggests two paths forward: one is to clean up the epistemic environment so that expertise can be properly recognized and deference can function as it should; the other is to foster conditions that make understanding more widely accessible, even if such understanding is partial or incomplete.

Both tasks are undeniably challenging. But if we are to take the dangers of disinformation seriously, then we must take these challenges seriously too.[11]

Footnotes

1 Elon Musk (@elonmusk), "USAID is a criminal organization. Time for it to die." X (formerly Twitter), February 2, 2025, https://x.com/elonmusk/status/1886102414194835755.

2 In this paper, I work with Simion's (2024) account of disinformation. However, my argument does not hinge on this specific framework. Alternative accounts characterize disinformation in terms of an intent to mislead people or content that has the function to mislead people, for example (see Floridi 2011; Fallis 2015). The central claims of this paper remain compatible with a range of theoretical approaches to disinformation.

3 For factive accounts of understanding, see, for instance, Lipton (2009), Grimm (2006), Pritchard (2009), Strevens (2013), Greco (2014) and Hills (2016).

4 These may be classified as intellectual vices in the sense, as Cassam (2016: 164) puts it, of "character traits that impede effective and responsible inquiry". According to Zagzebski (1996: 152), such vices include, for example, "intellectual pride, negligence, idleness, cowardice, conformity, carelessness, rigidity, prejudice, wishful thinking, closed-mindedness, insensitivity to detail, obtuseness, and lack of thoroughness". For an extended treatment of the topic, see Cassam (2019).

5 According to their results, people have high demands with respect to explanatory depth when it comes to attributing understanding, higher than those required for attributing knowledge.

6 An anonymous reviewer insightfully observed that individuals with fragmented factual knowledge – such as Richard in Dark Twin – may, in some cases, be more vulnerable to disinformation than those who are more thoroughly ignorant of how things work and why they work the way they do. Disinformation can gain traction by stitching together disparate pieces of information into a seemingly coherent explanatory framework, thereby manufacturing an illusion of understanding. This pseudo-understanding can be especially persuasive: it gives the impression of grasp while concealing explanatory shallowness. By contrast, someone who lacks even this fragmented knowledge – who has no idea that Mercury's orbit exhibits anomalies, and no inkling of general relativity – may dismiss the dark twin hypothesis as irrelevant, implausible, or simply unintelligible. Thus, although I maintain that the sheer lack of factual knowledge can leave one vulnerable to disinformation, there may be cases in which such ignorance provides a kind of epistemic insulation, while those with fragmented knowledge may be more easily drawn in by the illusion of understanding.

7 Boyd (2017) argues that although the literature has mostly focused on understanding that is difficult to attain, there is such a thing as easy understanding. As he puts it, "in cases of easy understanding, all of the mechanisms that are needed to grasp the relevant information, or connections between the relevant bits of information, are already activated when processing the relevant information. By possessing the relevant background information and concepts, then by successfully processing my testimony that, e.g., 'to play Pong, move the paddle to hit the square,' you have thereby done enough to grasp the relevant information or connections between bits of information, namely those connections between the information that I am conveying to you through testimony and the background information that you already possess" (Boyd 2017: 16). For my purposes, it is enough to claim that understanding is typically difficult to attain. I do not need to take a stronger stance by claiming that it is necessarily so.

8 See, for example, Zagzebski (2001), Kvanvig (2003), Grimm (2006), and Hills (2016).

9 Levy (2022) proposes several measures to mitigate epistemic pollution in scientific environments. These include reducing the prevalence of predatory and fake journals, strengthening support for research replication, and curbing incentives for premature publicity through "science by press release", which often distorts findings by exaggerating their significance.

10 Hazlett (2025) usefully distinguishes between two perspectives on the relationship between testimony and understanding. Testimonial understanding pessimism is the view that understanding cannot be transmitted by testimony in the straightforward manner in which propositional knowledge typically can. Conversely, testimonial understanding optimism holds that understanding can indeed be so transmitted. While pessimism is currently the dominant view (see, e.g., Zagzebski 2012; Hills 2016; Grimm 2006; Pritchard 2010), optimism has attracted its share of proponents as well (e.g., Boyd 2017; Hazlett 2018; Malfatti 2019).

11 I am grateful to the members of REDD (Network for the Study of Disinformation and Democracy) and to an anonymous reviewer for their insightful comments and suggestions on an earlier version of this paper.

References

Anderson, E. (2011). 'Democracy, Public Policy, and Lay Assessments of Scientific Testimony.' Episteme 8(2), 144–64.
Baumberger, C., Beisbart, C. and Brun, G. (2016). 'What is Understanding? An Overview of Recent Debates in Epistemology and Philosophy of Science.' In Grimm, S., Baumberger, C. and Ammon, S. (eds), Explaining Understanding: New Perspectives from Epistemology and Philosophy of Science, pp. 1–34. New York: Routledge.
Boyd, K. (2017). 'Testifying Understanding.' Episteme 14(1), 103–27.
Burke, P. (2023). Ignorance: A Global History. New Haven: Yale University Press.
Cassam, Q. (2016). 'Vice Epistemology.' The Monist 99, 159–80.
Cassam, Q. (2019). Vices of the Mind: From the Intellectual to the Political. Oxford: Oxford University Press.
Elgin, C. (2007). 'Understanding and the Facts.' Philosophical Studies 132(1), 33–42.
Elgin, C. (2017). True Enough. Cambridge, MA: The MIT Press.
Fallis, D. (2015). 'What is Disinformation?' Library Trends 63(3), 401–26.
Floridi, L. (2011). The Philosophy of Information. Oxford: Oxford University Press.
Goldman, A. (2001). 'Experts: Which Ones Should You Trust?' Philosophy and Phenomenological Research 63(1), 85–110.
Gordon, E.C. (2012). 'Is There Propositional Understanding?' Logos and Episteme 3, 181–92.
Gordon, E.C. (2016). 'Social Epistemology and the Acquisition of Understanding.' In Grimm, S., Baumberger, C. and Ammon, S. (eds), Explaining Understanding: New Perspectives from Epistemology and Philosophy of Science, pp. 293–317. New York: Routledge.
Greco, J. (2014). 'Episteme: Knowledge and Understanding.' In Timpe, K. and Boyd, C. (eds), Virtues and their Vices, pp. 285–302. New York: Oxford University Press.
Grimm, S. (2006). 'Is Understanding a Species of Knowledge?' British Journal for the Philosophy of Science 57(3), 515–35.
Grimm, S. (2008). 'Explanatory Inquiry and the Need for Explanation.' British Journal for the Philosophy of Science 59(3), 481–97.
Grimm, S. (2016). 'Understanding as Knowledge of Causes.' In Fairweather, A. (ed), Virtue Epistemology Naturalized: Bridges between Virtue Epistemology and Philosophy of Science, pp. 215–30. Dordrecht: Springer.
Guerrero, A. (2007). 'Don't Know, Don't Kill: Moral Ignorance, Culpability, and Caution.' Philosophical Studies 136(1), 59–97.
Hannon, M. (2021). 'Recent Work in the Epistemology of Understanding.' American Philosophical Quarterly 58(3), 269–90.
Hazlett, A. (2018). 'Understanding and Structure.' In Grimm, S. (ed), Making Sense of the World: New Essays on the Philosophy of Understanding, pp. 135–58. Oxford: Oxford University Press.
Hazlett, A. (2025). 'Understanding and Testimony.' In Lackey, J. and McGlynn, A. (eds), The Oxford Handbook of Social Epistemology, pp. 600–19. Oxford: Oxford University Press.
Hills, A. (2016). 'Understanding Why.' Noûs 50(4), 661–88.
Khalifa, K. (2012). 'The Role of Explanation in Understanding.' British Journal for the Philosophy of Science 64(1), 161–87.
Khalifa, K. (2017). Understanding, Explanation, and Scientific Knowledge. New York: Cambridge University Press.
Kvanvig, J. (2003). The Value of Knowledge and the Pursuit of Understanding. New York: Cambridge University Press.
Kvanvig, J. (2018). 'Knowledge, Understanding, and Reasons for Belief.' In Star, D. (ed), The Oxford Handbook of Reasons and Normativity, pp. 685–705. New York: Oxford University Press.
Levy, N. (2022). Bad Beliefs: Why They Happen to Good People. Oxford: Oxford University Press.
Lipton, P. (2009). 'Understanding without Explanation.' In de Regt, H., Leonelli, S. and Eigner, K. (eds), Scientific Understanding: Philosophical Perspectives, pp. 43–63. Pittsburgh: University of Pittsburgh Press.
Malfatti, F.I. (2019). 'Can Testimony Generate Understanding?' Social Epistemology 33(6), 477–90.
McKenna, R. (2023). Non-Ideal Epistemology. Oxford: Oxford University Press.
Oreskes, N. and Conway, E.M. (2010). Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. London: Bloomsbury Press.
Pritchard, D. (2009). 'Knowledge, Understanding and Epistemic Value.' Royal Institute of Philosophy Supplements 64, 19–43.
Pritchard, D. (2010). 'Knowledge and Understanding.' In Pritchard, D., Millar, A. and Haddock, A. (eds), The Nature and Value of Knowledge: Three Investigations, pp. 3–87. Oxford: Oxford University Press.
Sanger, D.E. and Wong, E. (2025, February 7). 'U.S.A.I.D. Becomes a Target for Conspiracy Theories and Disinformation.' The New York Times. https://www.nytimes.com/2025/02/07/business/usaid-conspiracy-theories-disinformation.html
Simion, M. (2024). 'Knowledge and Disinformation.' Episteme 21, 1208–19.
Strevens, M. (2013). 'No Understanding without Explanation.' Studies in History and Philosophy of Science Part A 44(3), 510–15.
Wilkenfeld, D.A., Plunkett, D. and Lombrozo, T. (2016). 'Depth and Deference: When and Why We Attribute Understanding.' Philosophical Studies 173(2), 373–93.
Zagzebski, L. (1996). Virtues of the Mind: An Inquiry into the Nature of Virtue and the Ethical Foundations of Knowledge. Cambridge: Cambridge University Press.
Zagzebski, L. (2001). 'Recovering Understanding.' In Steup, M. (ed), Knowledge, Truth and Duty, pp. 235–56. Oxford: Oxford University Press.
Zagzebski, L. (2012). Epistemic Authority: A Theory of Trust, Authority, and Autonomy in Belief. Oxford: Oxford University Press.