
Should Science Journalists Know Science?

Published online by Cambridge University Press:  26 February 2025

Viviane Fairbank*
Affiliation:
Arché Philosophical Research Centre for Logic, Language, Metaphysics and Epistemology, St Andrews and Stirling Graduate Programme in Philosophy

Abstract

Which additional epistemic skills or attributes must a competent journalist possess in order to produce competent science journalism? I aim to answer this question by bringing together insights from journalism, science communication, and epistemology. In Section 2, I outline the Epistemic Challenge for Science Journalism. In Section 3, I present the dominant answer in the literature, the Knowledge-Based Solution, and argue against it. In Section 4, I propose an alternative, the Confirmation-Based Solution. In Section 5, I argue that this solution can address recent concerns regarding journalistic objectivity. Section 6 discusses my proposal in the context of epistemological debates about norms of assertion. Section 7 concludes.

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of The Canadian Journal of Philosophy, Inc

1. Introduction

Journalism scholars and educators have long abandoned the idea that the only skills required for journalism are those of gathering information and presenting it to the public—of observing, interviewing, and storytelling. This simple conception of journalism has been criticized by those keenly aware of failures in science reporting, who have come to emphasize that narrative aggregation is not enough: what matters is how journalists assess the information they gather, and which sources (human or documentary) they consider fit to provide it.

The controversy created by one disgraced researcher’s unfounded claims of a link between the measles, mumps, and rubella (MMR) vaccine and autism is a tired example of these difficulties. Despite scientific evidence to the contrary, many journalists in the 2000s reported on the issue as an open question that required equal consideration of “both sides”: those who thought there was a causal link between vaccines and autism, and those who did not. Certain politicians and anti-vaccine activists cited these media articles in their efforts to prompt regulatory changes in health care. Geographic clusters of parents who refused vaccination for their children appeared, and measles outbreaks in the United Kingdom, the United States, and Canada were attributed to the non-vaccination of children (Rao & Andrade, Reference Rao and Andrade2011).

Many blamed journalists for these avoidable, unfortunate results: “The research never justified the media’s ludicrous overinterpretation,” Goldacre (Reference Goldacre2008) wrote. “If they had paid attention, the scare would never have even started” (244). The criticism is not simply that journalists reported on the issue with false balance, treating both sides of the issue as equally worthy of consideration. Science journalists ought not “air both sides of a debate when only one is correct” (Blum, Hatch, & Jackson, Reference Blum, Hatch and Jackson2020, 76), but the important point here is what is presumed to have caused such an error in judgment. Despite their best intentions, journalists had failed to consider readily available scientific evidence regarding vaccines and autism. Perhaps they did not know how to assess the available evidence, they were misled by their sources, or they wrongly judged the sufficiency of their investigations; in any case, a deficiency in their understanding of the topic had led them to identify as scientifically legitimate and open for debate a controversy that was neither of these things.Footnote 1

The inclination is to interpret the MMR-vaccine case as showing not the impossibility of good journalism on politicized scientific issues, but rather the dangers of bad journalism: if science journalists had done their job properly, the idea goes, their work would have served to illuminate scientific knowledge instead of obfuscating it—to defuse unnecessary controversy and public confusion concerning scientific issues. A competent science journalist should therefore have skills beyond information-gathering and storytelling that allow them to navigate the complex landscape bridging science and society.

I am interested in this normative perspective on science journalism. In what sense did journalists fail to grasp the relevant science regarding vaccines and autism, as their critics have claimed? More generally, to what epistemic norms ought we hold science journalists today? I call this the Epistemic Challenge for Science Journalism (ECSJ).

In this article, I aim to answer the ECSJ by bringing together insights from different disciplines: (i) practicing journalists and journalism educators; (ii) scholars of science communication, who are interested in the proper dissemination of scientific information; and (iii) epistemologists, who are interested in the philosophical study of knowledge. I describe this work as a project in applied epistemology (cf. Lackey, Reference Lackey2021). On the account I propose, science journalists—like all journalists—are governed by what I call a constitutive norm of confirmation: competent science journalists have “checked” the scientific facts they report. The central notion is thus confirmation, which I understand in a distinctive way, inspired by recent research on norms of editorial fact-checking.Footnote 2

Here is the plan. In Section 2, I detail the MMR-vaccine case alluded to above and outline the ECSJ. In Section 3, I present the dominant answer to the ECSJ in the science-communication and journalism-education literature, which I call the Knowledge-Based Solution, and I argue that it is unconvincing: knowledge of science (as it is conceptualized by these authors) is neither sufficient nor necessary for good science journalism. In Section 4, I propose an alternative, which I call the Confirmation-Based Solution, and I show that it is able to make sense of the MMR-vaccine case and other cases in a satisfying way. In Section 5, I turn my attention to recent debates about journalistic objectivity, and I argue that the Confirmation-Based Solution can respond to important concerns voiced by journalists and their audiences. Finally, Section 6 discusses my proposal in the context of philosophical debates about epistemic norms of assertion, and I argue that it represents a distinctive way of carving up the “epistemic territory” in nonideal contexts. Section 7 concludes.

2. MMR-Vaccine Case and the ECSJ

The MMR-vaccine trouble stems from 1998, when Andrew Wakefield, a physician who has since been banned from practicing medicine in Britain, and colleagues published a study in The Lancet claiming a possible causal link between the MMR vaccine and autism-related symptoms in children. At the time, the scientific and medical communities did not pay much attention to the paper; a professional consensus had already been reached that vaccines were safe, and Wakefield’s study did not provide convincing evidence to the contrary.Footnote 3 Science journalists largely followed suit, at least at first, careful to report on the story in its proper context and with the appropriate caveats (Goldacre, Reference Goldacre2008).

However, there was already growing anxiety among the public about the safety of childhood vaccines and possible environmental causes of autism (Goldacre, Reference Goldacre2008; Lewis & Speers, Reference Lewis and Speers2003; Offit, Reference Offit2007), and political actors soon intervened, stoking a media frenzy (Brainard, Reference Brainard2013; Li, Stroud, & Jamieson, Reference Li, Stroud, Jamieson, Jamieson, Kahan and Scheufele2017; Rao & Andrade, Reference Rao and Andrade2011). By 2005, journalists in the United Kingdom and the United States were publishing articles about the “legitimate scientific debate” regarding vaccines (a quote from Schulman, Reference Schulman2005), most of which relied heavily on Wakefield’s claims as well as the testimony of concerned parents.

At the same time, the research that led to the 1998 paper was exposed as fraudulent, largely due to the efforts of an investigative journalist at The Sunday Times Magazine in the UK, Brian Deer, who spent years analyzing Wakefield and his colleagues’ scientific misconduct, including the unethical treatment of children, undisclosed conflicts of interest, and doctored medical histories (Godlee, Reference Godlee2011; Rao & Andrade, Reference Rao and Andrade2011).Footnote 4 Deer’s reporting, which led to The Lancet’s retraction of the paper in 2010, served to dampen the media’s practice of naïve “he-said/she-said” reporting on MMR vaccines. It is worth emphasizing, however, that Wakefield’s claims had never been taken seriously by the scientific community; by the time the paper was retracted, Wakefield’s exposure as fraudulent served only as ammunition for one side of a debate that was taking place within media, politics, and popular culture—not within science. As Brainard (Reference Brainard2013) writes for the Columbia Journalism Review:

Only a small group of [science] researchers ever even entertained the theory about autism. The [media] coverage rarely emphasized this, if noted it at all, and instead propagated misunderstanding about vaccines and autism and gave credence to what was largely a manufactured controversy.

Still today, the MMR-vaccine case is considered one of the most serious examples of flawed science journalism (see, e.g., Lewis & Speers, Reference Lewis and Speers2003; Mnookin, Reference Mnookin2011; Allsop, Reference Allsop2020).Footnote 5 A number of factors conspired to create this situation—including political, financial, and other structures, many of which were beyond any individual journalist’s control. However, in this article, I am interested in focusing on a subset of concerns voiced by certain commentators, journalism educators, and scholars of science communication: those related to journalists’ epistemic responsibilities when covering scientific stories, and what can go wrong when those responsibilities are not met.Footnote 6 This focus on the epistemic norms of journalism corresponds to how practitioners and scholars across a number of disciplines—journalists (Blum et al., Reference Blum, Hatch and Jackson2020; Rensberger, Reference Rensberger2000), educators (Kovach and Rosenstiel, Reference Kovach and Rosenstiel2021), science communicators (Li et al., Reference Li, Stroud, Jamieson, Jamieson, Kahan and Scheufele2017), and applied epistemologists (Simion, Reference Simion2017)—have thought about the norms of journalism. Although each discipline brings its own perspective to the issue, there is broad agreement that the problem exemplified by the MMR-vaccine case is, at least in part, an epistemic one.

Journalists covering any science story, whether politicized or not, are expected to make a number of informed choices—regarding which story to report, whom to interview, which documentary sources to consult, which facts to feature and in which order, when to publish, and so on. All of these decisions affect whether a story will be considered epistemically defensible—by which I mean, as a first gloss, whether it will be considered reasonable, rigorous, and responsible science journalism. In the case of MMR-vaccine coverage, the criticism goes, the controversy’s severity and persistence were caused in part by science journalists’ poor choices, including their flawed assessment of what could be considered a scientific fact and whether Wakefield was qualified to provide them. These poor choices are at the heart of the criticism of “false balance” in media coverage of the issue.Footnote 7

Of course, the condemnation of these journalists’ choices as “poor” is a defensive attitude, formed in response to those critics of journalism who interpret the MMR-vaccine case as evidence that journalists are ill-suited to report on scientific stories at all.Footnote 8 These critics complain that most journalists do not have a scientific background and are not part of the scientific research community; moreover, they are bound to norms of relevance and newsworthiness in publishing that may disincentivize epistemically responsible pursuits. Especially in the context of polarization and science denialism, there may be no good reason to think that journalists are ever qualified to communicate scientific findings to the public.

I find this cynical attitude toward journalism unappealing. It is best not to prematurely dismiss the potential of competent science journalism—and, given the normative framework with which I am operating, it is also unnecessary. The goal of epistemology of journalism, as I see it, is not to provide a descriptive account of journalistic practices that accommodates all people who have once called themselves journalists. Rather, the goal is to articulate what journalists and people who study journalism think journalism can and should be, and the norms to which journalists hold themselves and their peers today.

Most journalists accept the criticism of past MMR-vaccine coverage but nevertheless insist that their profession is capable of reporting on such topics in principle; they protest that those who reported on the vaccine–autism link as a “legitimate scientific debate” failed to meet the epistemic norms of their profession.Footnote 9 The implication is that there is a divide between competent and incompetent journalism on the topic of vaccines—a divide captured by the journalists’ epistemic relationship to the science at issue.

One fundamental question is thus how to describe the competence that is required of science journalists, if we want to uphold the belief that good coverage of politicized or controversial scientific topics is, in principle, possible. I call this the ECSJ:

ECSJ: Which additional epistemic skills or attributes must a competent journalist possess in order to produce competent science journalism?

I assume that the ECSJ is not a hopeless, senseless, or misguided question—that the epistemological divide between competent and incompetent science journalism can be helpfully described, and that this description can be useful in making sense of cases such as the MMR-vaccine scandal.

The ECSJ is concerned only with the epistemic requirements of science journalism. We can assume, then, that the epistemic requirements of journalism generally—whatever they are, likely related to researching, interviewing, and storytelling—have already been met: the question is what is required to turn a good journalist into a good science journalist. Some people might believe that there is no substantial epistemic difference between journalism and science journalism; their answer to the ECSJ would then be trivial. Others, however, believe that competent generalist journalists are not necessarily fit to be science journalists—and thus, they might coherently criticize a journalist’s coverage of MMR vaccines without condemning their competence as a journalist tout court. These are the kinds of distinctions that can be made using the ECSJ as a guide.

As will become clear, I lean toward a solution to the ECSJ that does not rely on a substantial difference in epistemic requirements between journalism and science journalism. In this sense, I diverge from some of the other answers in the literature.

3. The Knowledge-Based Solution and its Flaws

The beginning of an answer to the ECSJ can be found in recent proposals, touted by some journalism educators and science-communication scholars, to conceive of journalism as a “knowledge-based profession,” in which journalists use their “specialized knowledge” or “expertise” to report on topics such as science (Donsbach, Reference Donsbach2014; Nisbet & Fahy, Reference Nisbet and Fahy2015, Reference Nisbet, Fahy, Jamieson, Kahan and Scheufele2017; Patterson, Reference Patterson2013).

One of the earliest and strongest articulations of the knowledge-based journalism program comes from Patterson (Reference Patterson2013), a communications scholar who focused on the demands of political reporting in the United States. Patterson argued that journalists had gone too far in depending on the testimony of politicians and other authority figures instead of acquiring and using the specialized knowledge that would allow them to report only the truth on the topics in question. He blamed journalists’ “knowledge deficit” (66) for the practice of “he-said/she-said” reporting, according to which journalists covering a controversial topic published quotes from politicians on “either side” of the controversy. In principle, such a practice is meant to allow journalists to inform the public about the disagreement without “taking a side.” But because it involves deference to the loudest voices in an argument without acknowledgment of which is the most reasonable or credible, it also fosters a professional culture in which “‘power’ rather than truthfulness is the operative standard” (52).

Patterson proposed knowledge-based journalism as a revolution in journalism education. He encouraged journalists to gain knowledge on whatever topic they would like to report on in advance, so that they could not be manipulated by the sources whom they subsequently interviewed. Other journalists and scholars followed suit: Donsbach (Reference Donsbach2014) proposed that journalism become the “new knowledge profession”; Stephens (Reference Stephens2014) promoted a new standard of “wisdom journalism.” Over the following years, many journalism education programs began training their students in specialist topics, such as politics, economics, and science.Footnote 10

Although Patterson wrote only briefly about science journalism, he classified it among the specializations that required targeted subject-matter training. Scholars of science communication have picked up on the idea that science journalists’ “knowledge deficit” could explain their past epistemic failures, including the problematic reporting of various controversies (such as the MMR-vaccine case) as “he-said/she-said” affairs.Footnote 11 Indeed, similar ideas have long circulated in science-communication circles. Already in 2005, Mooney and Nisbet had written about problematic “balanced” reporting on the topic of teaching creationism in American schools:

What is a good editor to do about the very real collision between a scientific consensus and a pseudo-scientific movement that opposes the basis of that consensus? At the very least, newspaper editors should think twice about assigning reporters who are fresh to the evolution issue…. As journalism programs across the country systematically review their curriculums and training methods, the evolution “controversy” provides strong evidence in support of the contention that specialization in journalism education can benefit not only public understanding, but also the integrity of the media.

Drawing from Patterson and others, I call this the Knowledge-Based Solution to the ECSJ:

KBS: In order to produce competent science journalism, journalists need to have knowledge of science.

In other words, knowledge of science is all that a competent journalist requires in order to become a competent science journalist.

The KBS is certainly appealing. It is easy to understand its popularity in media scholarship and journalism education circles; few people would resist the idea that science journalism requires some specialist skillset. However, I do not think it is helpful to conceive of that specialization in terms of knowledge of science. As it stands, the formulation is too vague to be of use: What does it mean for a journalist to “know science”? If the question still feels substantive, that is only because—so long as we allow knowledge to be both theoretical and practical—it returns us to the very question posed by the ECSJ: knowledge of what, exactly, is doing the epistemic work in competent science journalism? The knowledge-based journalism literature is often somewhat more precise in its formulations, although not always uniform, and so I propose that the KBS can be faithfully clarified in three different ways:

KBS-(1): Science journalists ought to know science in a practical sense: they ought to be scientists.

KBS-(2): Science journalists ought to know science in a rudimentary sense: they ought to be science-literate.Footnote 12

KBS-(3): Science journalists ought to know science in a specialized sense: they ought to know scientific facts about the subject of their reporting.

In the rest of this section, I will argue against each of these elaborations of the KBS. The norms articulated by KBS-(1), -(2), and -(3) miss their mark: they equip the science journalist with too much of the wrong kind of epistemic good, and not enough of the right kind. Along the way, we will see the outlines of what I take to be a more credible solution to the ECSJ, which is grounded not in knowledge but in a more fine-grained epistemic competence.

Let us begin with the excessive requirements of KBS-(1): to expect that science journalists be scientists is clearly to go too far. After all, science is too vast and specialized a field for anyone to become a competent practitioner in the breadth of subdisciplines that are covered by the typical science journalist. Moreover, journalists are rarely full-fledged members of the communities on which they report: we do not expect sports journalists to be athletes, political commentators to be politicians, or food critics to be chefs. Is there any reason to think that science is so much more impenetrable than these other fields? Some of the best work in science journalism—for example, the work published in Undark, Quanta, and Nautilus magazines—is created by nonscientists.

Rather, science journalists’ skills lie not in science but in journalism: they are responsible, rigorous communicators of science. As Boyce Rensberger, a veteran science journalist, put it in 2014 (quoted in Blum et al., Reference Blum, Hatch and Jackson2020): “Good science journalism stands apart from science.”

A journalist’s job is typically to report on other disciplines: to open a line of communication from the outside of a discipline to the inside and back out again. In sociology, this is described as knowledge-brokering: science journalists are “translators” who “move back and forth between different social worlds” and distribute “knowledge that has been deassembled and reassembled” (Meyer, Reference Meyer2010, 122–123). Although this requires a variety of skills, the skills are arguably not scientific; they are something else entirely.Footnote 13 Science journalists possess specialized skills that go beyond the scientific domain: they are able to assess scientific claims, contextualize them (perhaps alongside nonscientific claims), and communicate them to the public.

In the philosophy of science, a complementary insight comes from the distinction between contributory and interactional expertise (Collins & Evans, Reference Collins and Evans2015). Roughly, contributory expertise is the ability to contribute to some area, while interactional expertise is the ability to discuss it; applied to science, “the core idea is that someone possessing interactive expertise may communicate with scientists in a discipline without being able to contribute to their research” (Gerken, Reference Gerken2022, 25–26). Science journalists are typically described as professionals who possess interactional expertise but lack contributory expertise—and this is not framed as criticism, but rather reflective of the demands of their discipline: they need to communicate with scientists and about science, not conduct scientific research themselves.

KBS-(1) is therefore an unsatisfying solution to the ECSJ, and this much seems to me to be relatively uncontroversial. Most KBS proponents seem to be arguing instead for either KBS-(2) or KBS-(3), or a combination of the two: science journalists ought to be science-literate and/or know specialized facts about the scientific domain on which they are reporting. However, it is difficult to see how declarative knowledge of (either rudimentary or specialized) scientific facts could capture the important epistemic difference between competent and incompetent science journalists.

First, science is not a body of facts—and a proper understanding of science does not result simply from declarative knowledge of scientific truths. To equate the two is to repeat the mistake committed by proponents of the so-called Deficit Model in science communication, which has been extensively criticized.Footnote 14 According to this model, “scientists and other experts possess crucial knowledge that nonscientists lack, and the purpose of science communication is to ‘fill the knowledge gaps’ in a largely one-way flow of information from expert to layperson” (Reincke et al., Reference Reincke, Bredenoord and van Mill2020, 1). As the idea goes, once the public knows the relevant scientific facts, they will be able to perform appropriate inferences and consequently make informed decisions regarding science policy and other applied matters.

The KBS is slightly different from the Deficit Model: it is not targeted at the kind of knowledge that the public needs to understand science, but rather the kind of knowledge that journalists need in order to communicate science. However, both positions are vulnerable to the same kinds of objections. Gerken (Reference Gerken2022) describes the Deficit Model as “an optimistic enlightenment picture… [that is] largely insensitive to the epistemically non-ideal aspects of human psychology” (176). In short: knowledge of scientific facts is not enough to meet the goals of science communication.

Here, again, the literature on scientific expertise is helpful: it is standardly accepted that someone could know every fact about a certain discipline while nevertheless not being an expert in that discipline—in neither an interactional nor a contributory sense.Footnote 15 Instead of knowing every fact that has been previously published about the MMR vaccine, a science journalist needs to know how those facts hang together, which are the most relevant to the topic at hand, which are robust and why, and what kind of evidence it would take to confirm or refute them; they need to know how a study such as Wakefield’s ought to be conducted, and the context in which its results ought to be understood. This requires a kind of epistemic competence that goes beyond the simple propositional knowledge of the subject at hand. In this way, knowledge of scientific facts does not suffice for science journalism.

Furthermore, science journalists do not simply aggregate scientific facts from journal articles or conduct interviews with scientists; they also provide services beyond a scientific purview. They contextualize, explain, discuss, humanize, and criticize scientific findings—and to do so, they engage with a number of sources, only part of which are properly scientific. A science journalist must consequently navigate the epistemic world outside of science just as well as that inside of science; part of what it means to be a good science reporter is knowing how to transition seamlessly between these worlds. Propositional knowledge accounts of the KBS do not account for this broader skillset that is required of the science journalist: in order for them to do their job competently, their epistemic relationship to science must be appropriately contextualized.

KBS-(3) seems to imply that a competent science journalist ought to know the facts regarding the topic they intend to report on in advance of reporting on it. However, this gets the order of events wrong: a journalist’s job is to research—interview experts, uncover and analyze documentary sources, and so forth—in order to gain access to the facts at issue. Instead of expecting that science journalists know the “state of the field” regarding any specific subset of science before their research begins, we expect that they know how to properly uncover it in the course of their research. Consequently, their assessment of the sources and evidence they encounter will not be an assessment based on knowledge of scientific facts, but rather one based on reason, rigor, and other such epistemic virtues. The requirements of KBS-(2) and -(3) go too far.Footnote 16

No propositional knowledge about vaccines and autism, as prescribed by KBS-(3), will help a journalist determine which skeptical questions are appropriate when interviewing Wakefield upon the publication of the 1998 Lancet study. The important thing is knowing how to use one’s knowledge in the appropriate way, and how to corroborate the information one has gathered (including information that one does not yet know to be true). Indeed, the KSJ Science Editing Handbook (Blum et al., Reference Blum, Hatch and Jackson2020) opens with the idea that science journalists “must ask tough questions, analyze information, and demand that extraordinary claims be supported by extraordinary evidence” (7); these tasks do not require knowledge of science, but rather appropriate skepticism toward scientific claims.

Finally, knowledge of science is neither a realistic nor a useful requirement. Science is a constantly developing affair; when science journalists report on a story, they are often (though not always) engaging in a domain in which the facts may not yet be entirely settled—and the details of which certainly go beyond the basic facts that are used to evaluate scientific literacy. Scientific domains are made up not only of facts but also of conjectures, hypotheses, inferences, theories, and the like. Knowledge, understood as justified true belief, is therefore not an epistemic achievement that science journalists should expect always to reach when reporting on developing stories. Journalists might never know the truth about the possible side effects of vaccination—after all, scientists might not either. Rather, we expect journalists to report on established information while aiming for the truth. We expect the information they publish to be well-grounded in science and attained by rational means.

I have argued that KBS-(1), -(2), and -(3) do not present viable solutions to the ECSJ. Knowledge of science—cashed out as knowledge of how to be a scientist, of rudimentary scientific facts, or of specialized scientific facts—will not provide a satisfactory solution to whatever problem was ailing the incompetent science journalist. Although some kind of scientific knowledge is certainly part of a competent science journalist’s toolbox, this knowledge is derivative of more fundamental epistemic skills and attributes, and it is typically not of the kind envisioned by the KBS literature.

Based on the considerations that have been outlined in this section, the ECSJ will be answered best by an account that takes into consideration more fine-grained epistemic notions, such as responsibility, rigor, and reasonableness. The remainder of this article is dedicated to articulating one such account, grounded in evidentialist epistemic norms.

4. The Confirmation-Based Solution

I argued in Section 3 that science journalists ought to report claims that are well-grounded in science; this task involves communicating about science, asking appropriate questions of experts, and identifying scientific facts and their justification. In summary, we might say that science journalists need to be able to competently navigate the epistemic world of science, including scientists’ operative notions of facts, justification, and the rest. Science journalists thus need to possess whatever epistemic skills or attributes will allow them to do so reliably—which, as we have established, need not be knowledge of science, but rather something that is both more general, to accommodate the broad scope of journalists’ activities, and more precise, to allow for more fine-grained distinctions.

In my view, the best approach to answering the ECSJ is to reflect on the epistemic norms of journalism in general, and then to determine what it would take for them to be properly applied in a scientific context. This supports the idea—contra the KBS above—that the epistemic norms of science journalism are no different in kind from those of journalism in general; science journalism is just journalism applied (in part) to the scientific world. (In this sense, I take an appropriate solution to the ECSJ to be concerned with the refinement of journalistic skills, as opposed to the addition of wholly different epistemic goods such as scientific knowledge.) I propose that we answer the ECSJ, then, by turning to practicing journalists.

Kovach and Rosenstiel (Reference Kovach and Rosenstiel2021) have proposed an account of journalism as essentially a “discipline of verification,” in which journalists aim to find not just “the fact” but also the “truth about the fact” (71). Journalistic objectivity, on their account, is grounded in journalists’ checking of the facts before those facts are published. Indeed, empirical research has found that journalists take verification to be the “essential element” of their practice (Shapiro, Brin, Bédard-Brûlé, & Mychajlowycz, Reference Shapiro, Brin, Bédard-Brûlé and Mychajlowycz2013).

In line with these insights, one can argue—as we have in Baker and Fairbank (Reference Baker and Fairbank2022)—that journalism is guided by one fundamental norm: that a journalist should report a piece of information only if that information has been checked. This Confirmation Norm, as we might call it, applies to all kinds of journalism—daily news and longform, print and broadcast, opinion and explanatory. Under it, journalists are bound to check every fact they wish to publish: names, dates, descriptions, quotes from sources and the information included within those quotes, causal explanations, the factual bases of opinions, and more. There must consequently always be two distinct steps to establishing a statement in a journalistic story: first, it is reported, and second, it is “checked.” The first step is exploratory, while the second is more focused: the journalist already has a fact and a set of sources from which it has been reported, and their job is to check that the fact is well supported by the appropriate kind of evidence. I discuss the Confirmation Norm further in Sections 5 and 6.

This norm can be implemented in a variety of ways depending on the resource constraints of the journalist in question. Some (lucky) newsrooms can afford to have an in-house fact-checking department and devote substantial time to the process, while others work on tighter deadlines with fewer staff. The important point is not how checking is implemented, practically speaking, but rather that every piece of information is checked—and this, I take it, is constitutive of the practice.Footnote 17 I thus propose the Confirmation-Based Solution to the ECSJ:

CBS: In order to produce competent science journalism, journalists need to be able to check scientific facts.

In other words, the refinement (in the scientific realm) of a competent journalist’s prior ability to check facts is all that is required for them to become a competent science journalist.

To evaluate this proposal, one first needs to understand what is meant by “checking facts.” Fact-checking is a nuanced term in journalistic circles—it is, as I argue in Section 6, an epistemic concept that extends beyond straightforward epistemological concerns, at least as philosophers have traditionally understood them. Therefore, it is worth examining the practice of fact-checking in more detail before evaluating CBS’s suitability as a solution to the ECSJ.

Editorial fact-checking is unrelated to political or media fact-checking, although the name is similar; it is an independent editorial process in journalism that takes place after a story has been reported and edited and before it is published, and which is usually not visible to the public.Footnote 18 Once a reporter has worked on a story, a fact-checker—either a different journalist or the same original reporter, depending on the structure of the newsroom in question—will go through the story and independently confirm each statement; they call back every person who was interviewed (to ensure the accuracy of the quotes and information attributed to them), go through every document referenced (to ensure that all information has been properly conveyed and attributed), and re-research the factual context of every evaluative claim (to ensure that everything is understood and nothing important overlooked). As they work, the fact-checker notes suggested changes to the piece, ranging from the correction of factual inaccuracies, to the rephrasing of a sentence, to the revision of (sections of) a piece, to the cancellation of its publication. The editorial team makes all necessary modifications to the story before publication—thus ensuring, in principle, that no part of the published work can be objected to “on factual grounds.”Footnote 19

Editorial fact-checking has a long history in professional journalism, and it is considered to be the normative ideal for long-form and investigative journalists (Baker & Fairbank, Reference Baker and Fairbank2022). Therefore, it is a good place to look for an explicit codification of the norm of confirmation that governs journalism in general. As Borel (Reference Borel2018) emphasizes, even if editorial fact-checking is not practiced by every organization, the fact-checking mindset remains: in daily newsrooms, where the nature of breaking news does not allow for extended checking, “stories go through layers of experienced editors, who challenge iffy claims or storylines and, when necessary, send the journalists to do more reporting” (2). Copy editors and lawyers also play the role of fact-checker in small newsrooms, as do specialized research editors.Footnote 20

Editorial fact-checking is not as simple as it might initially sound; journalists are used to thinking about the confirmation of facts in a way that might not line up with standard assumptions. In contemporary journalism, these insights have led to the establishment of nuanced norms of confirmation.Footnote 21

Journalists standardly rely on two kinds of sources: gathered sources (documents, audio and video recordings, online resources, books and articles, etc.) and interviewed sources (people, experts, witnesses, etc.). One of the central notions in play when checking journalistic facts is that of an authoritative source: given a particular statement and a source taken to be evidence for that statement, a journalist needs to determine whether the evidence is sufficient—whether the source in question really does confirm the fact in question, and whether it can be considered “authoritative” in that regard. Authority is understood as fact-relative and gradable: given a certain fact, certain sources are more authoritative for confirming or refuting it, while others are less so or not at all.Footnote 22

The general principle is that every fact in a story should be confirmed with at least one authoritative source or several independent non-authoritative sources. If that is not possible, the fact in question may need to be rephrased or removed, or an explicit acknowledgment of the insufficiency of evidence may need to be added, thereby weakening the strength of the assertion. Journalists also rely heavily on the notion of triangulation, which involves using different sources to corroborate the same fact. As a matter of principle, if several authoritative sources exist to check a fact, the journalist should consult them all and seek to explain any discrepancies among them.

A large part of journalists’ work during the confirmation stage, then, is the assessment of evidence and the search for further evidence. Assessment of evidence, here, has a wide scope: journalists pay attention not only to straightforward accuracy (are the individual facts, as far as we can tell, true?) but also to context and implication (is the audience of this work likely to conclude from it things that are, as far as we can tell, true?) (Blum et al., Reference Blum, Hatch and Jackson2020). Ivan Oransky, founder of Retraction Watch, says that a good fact-checker “digs in” to find what is missing from a story (Borel, Reference Borel2018, 22). They do so by consulting every piece of evidence that ought to be consulted on the topic at hand (they are not restricted to the reporter’s original sources), as well as speaking with relevant experts in order to review the facts and their assessment of those facts. The “expert interview,” in which a journalist calls an expert “on background” to discuss the facts of a story before it is published and make sure no important context is missing, is a foundation of editorial fact-checking.

Further norms govern interviewing practices and the assessment of evidence during fact-checking; I do not discuss these here, although I return to them briefly in Sections 6 and 7. The central point is simply that such norms exist and do guide journalistic practice, and they are sufficient, I think, to capture the benefits of the CBS as an answer to the ECSJ. Editorial fact-checking is not foolproof, but it is something like a “sensitive” method (Becker, Reference Becker2007): if a journalist is minimally competent, fact-checking gives them a way of reliably finding out if and when they have made a factual mistake in their reporting, and they will be able to rectify it. Confirmation increases the chances that a journalist has succeeded in their epistemic goals, and decreases the chance that they have gone far astray.

The CBS has many advantages over the KBS. First, as I have outlined above, it is representative of contemporary science journalism. Borel (Reference Borel2018) writes that “science journalists—like all journalists—should have formal processes to make sure their stories are accurate,” and that fact-checking is the best way to meet this goal (1). Importantly, fact-checkers usually do not know scientific facts in advance; they only know how to appropriately go about corroborating the information they have been given.Footnote 23

Second, the CBS is able to respond to many of the objections that were raised against the KBS: the ability to check scientific facts is, I argue, what makes the difference between competent and incompetent science journalists. Confirmation does not require knowledge: it requires deference to the appropriate sources, and the ability to recognize them as such. In philosophical terms, we can describe the practice of fact-checking as a journalist’s search for the proper justification of the facts they aim to publish—and so part of the skills required by the CBS are those of identifying scientific facts and their justification (cf. Gerken, Reference Gerken2022). However, journalists also need to place those findings in their appropriate contexts, so the types of evidence a journalist consults will not always be solely scientific. This explains the broader skillset that is required of the science journalist, as outlined in Section 3.

Of course, there are limits to the usefulness of confirmation: the fact that one’s body of evidence strongly supports a conclusion does not entail that the conclusion is true. The CBS allows for the possibility that good science journalists will sometimes report false things. I take this to be a virtue of the account: as I outlined in Section 3, given the nature of scientific research, it would be unreasonable to demand that science journalists report only scientific truths. Even the most competent science journalists are restricted by the limits of current scientific research.

Finally, despite its divergence from the KBS, the CBS is able to respond to the worries that motivated such a solution. Recall that Patterson (Reference Patterson2013) and other proponents of knowledge-based journalism criticized journalists’ reliance on testimony because it represented a significant vulnerability—an overreliance on officials or other figures of authority when gathering facts, which led to problematic “he-said/she-said” reporting. Of course, the solution is not for journalists to stop relying on testimony; that would prevent journalists from doing their jobs at all.Footnote 24 Instead, Patterson asks that journalists interview their sources from a “position of strength” (76). The CBS, I think, provides a way to understand this requirement without turning to knowledge: journalists are bound to check the facts that they have gathered during an interview, to triangulate the resulting testimonial evidence with other evidence, and to find authoritative sources for the fact in question.Footnote 25

Let us return to the MMR-vaccine case. Fact-checking minimizes the chance that science journalists will report scientific inaccuracies, such as the claim that MMR-vaccines cause autism; and, perhaps more importantly, it also minimizes the chance that journalists will imply scientific inaccuracies in their work, whether consciously or not. This is because of the broad scope of fact-checking, which examines facts and their accuracy in context as opposed to in isolation. Katie Palmer, a science editor at Quartz, says that “fact-checking is more than checking facts: It is also about checking assumptions” (Blum et al., Reference Blum, Hatch and Jackson2020, 152).

Norms for science fact-checking show how this is done in practice (Baker & Fairbank, Reference Baker and Fairbank2022; Borel, Reference Borel2018). If a journalist chose to report on Wakefield and his colleagues’ paper from 1998, they would be bound to assess the scientific evidence—and, on this basis, they would be bound to dismiss Wakefield’s claims, or at the very least, severely dampen them. (A similar point is made by Gerken (Reference Gerken2022), who writes that “there is extremely strong and growing scientific justification that there is no correlation between MMR vaccines and autism” (142).) They would also be bound to triangulate anti-vaccine activists’ claims with the evidence provided by other sources, thus placing them in their proper context. In addition, they would be bound to assess the authority of the various sources concerned: parents of unvaccinated children, for example, would be considered authoritative on certain facts regarding their experiences, but not on the scientific facts. These tools are enough to explain how the CBS prevents “false balance” in reporting on MMR vaccines. The failure of journalists who reported on the MMR-vaccine case with false balance was a failure to properly fact-check.Footnote 26

More complicated cases of science journalism can also be addressed by the CBS. As I have emphasized throughout this article, being a good science journalist does not mean taking only scientific sources seriously; it means knowing how to check scientific facts and knowing how to place them in their proper context, alongside sources that may be nonscientific but nevertheless relevant to the story at hand. (These are skills that are particular to the science journalist, and hence are under the remit of the ECSJ, but they involve experience beyond the scientific realm.) The CBS should capture why, in the MMR-vaccine case, where the science is settled, a journalist ought not publish work that falsely communicates the science in question. However, it should also capture why, in other contexts where the scientific consensus is uncertain, a journalist ought to take seriously the testimony of people with relevant experience. “Long COVID” (Yong, Reference Yong2023) is emerging as one contemporary example of this: in the context of a fluctuating information landscape, as well as contradictions between patient testimonies and medical records, the CBS provides a way forward for competent science journalists who are interested in reporting on the illness. Instead of trying to determine what the scientific facts are on their own, these journalists need to gather testimony from relevant parties, place them in their proper context, and triangulate. The results of their reporting may shift as the science evolves.

In keeping with the CBS, discussions among science journalists about how to report responsibly and with rigor on issues such as long COVID often boil down to discussions about the proper identification of evidence and authority. In Body Politic’s Comprehensive Guide to Covering Long COVID, for example, there is discussion of the fact that positive PCR tests are often not sufficient for diagnosing long COVID, and hence, such medical evidence must be properly contextualized (Lowenstein, Reference Lowenstein2021). The guide also emphasizes the importance of “talking to patients and considering them to be experts on their lived experience,” while remembering that patients may still not be experts on what is causing their lived experience. “Similarly, while clinicians or researchers may be able to provide theories behind a patient’s experience, if they have not lived with the illness, these experts are not always helpful sources to speak to the lived experience.” No source is de facto more authoritative than another; it all depends on the fact that one is trying to confirm.

The assumption is that journalists do not have specialized knowledge of long COVID; Body Politic’s guide provides instructions for reporting on a topic about which there is little common knowledge, and about which journalists are learning while they report. The important thing is that they are doing so in an established, reasonable, and rigorous manner, as the CBS demands.

Ed Yong, for example, an established science reporter, has written about the dangers of relying exclusively on published scientific papers when reporting on long COVID:

As energy-depleting illnesses that disproportionately affect women, long COVID and M.E./C.F.S. are easily belittled by a sexist society that trivializes women’s pain, and a capitalist one that values people according to their productivity. Societal dismissal leads to scientific neglect, and a lack of research becomes fodder for further skepticism. (Yong, Reference Yong2023).

A competent science journalist, then, knows not only how to confirm scientific facts, but also how to identify situations when the scientific facts are insufficient or unconvincing, and when nonscientific evidence may nevertheless be of relevance to a scientific topic. It is because of the amenability of the CBS to such complex situations that I think it provides a convincing solution to the ECSJ.

In what follows, I will address two possible concerns for the CBS: one stemming from journalists’ and others’ recent criticisms of journalistic objectivity, and the other stemming from philosophers’ conceptions of epistemic norms for assertion.

5. Journalism’s “Objectivity Wars”

The CBS is based on the idea that journalism is governed by the Confirmation Norm:

Confirmation Norm: A journalist should report a piece of information only if that information has been checked.

Because of this, some journalists might worry that the CBS is wedded to a problematic notion of objectivity in journalism and thus unconvincing: even if practicing journalists feel bound by the Confirmation Norm, these critics might say, that does not mean they ought to. Since journalistic objectivity is a flawed concept that ought to be discarded, the CBS is of no use; it relies on too naïve a picture of journalism, and too naïve a separation of epistemic and ethical matters within the discipline. This kind of concern matters, because a solution to the ECSJ should provide epistemological guidance for actual science journalists.

In response, I want to defend the CBS by looking at why some journalists have abandoned the notion of objectivity, and arguing that the Confirmation Norm is well suited to answer their concerns. There is good reason to think that journalism is governed by an epistemic norm—whether we choose to describe it as objectivity is a further issue—and something like the Confirmation Norm is well suited to capture it.

Journalistic objectivity has long been thought to distinguish the profession as an epistemically rigorous enterprise; in the twentieth century, professional journalists’ claim to objectivity was taken as a justification for their considerable influence in public discourse (Anderson & Schudson, Reference Anderson, Schudson, Wahl-Jorgensen and Hanzitsch2019). In recent years, however, there have been substantial disagreements over how the objectivity ideal should be understood, what purpose it serves, what harms it causes, and whether it should be preserved in any form.Footnote 27

One fruitful way of understanding the Objectivity Wars in journalism is as a debate about what the standard of assessment for objectivity ought to be. Here are the main contenders, as I see them:

Descriptive: A journalist is objective if they describe the world “as it really is,” free from social/personal biases.

Perspectival: A journalist is objective if they “remain impersonal” and find “balance” between conflicting views and evidence.

Procedural: A journalist is objective if they are maximally rigorous and exhaustive in establishing facts.

Few people in the objectivity debate disagree that these three elements (aiming for truth, assessing evidence, and establishing facts rigorously) are important journalistic ideals; indeed, these characterizations of objectivity are not necessarily in conflict with one another. The debate concerns which one(s), if any, should be taken as foundational and thereby accorded explanatory priority, and whether any such standard is even desirable or attainable.

Over the past 150 years, there has been a notable turn away from the descriptive and perspectival accounts of objectivity, and toward the procedural account (Ward, Reference Ward2015; Kovach & Rosenstiel, Reference Kovach and Rosenstiel2021; Schudson, Reference Schudson1981). Alongside the professionalization of the discipline, two important developments have encouraged this progression: the rise of journalism studies among sociologists, anthropologists, and ethnographers, which has led to an internal critique of journalists’ self-understanding (e.g., Tuchman, Reference Tuchman1972); and the criticism of mainstream journalism by members of marginalized communities, which has led to a reevaluation of the very concept of objectivity. I will focus on the second of these developments here.

Contemporary critics have complained that the descriptive account of objectivity is unrealistic and misguided, and that in practice, it has served to prevent certain people from participating in the profession, since they have been considered incapable of reaching the mythical “objective realm of facts” in which journalists are expected to operate. “What sounds like fact and news in newsrooms”—that is, what is considered objective—“is more likely related to what and who is considered to be rational, able to report, and/or distanced enough” (Callison & Young, Reference Callison and Young2020, 36). In a now famous essay for the New York Times, journalist Wesley Lowery stated the following:

Since American journalism’s pivot many decades ago from an openly partisan press to a model of professed objectivity, the mainstream has allowed what it considers objective truth to be decided almost exclusively by white reporters and their mostly white bosses.… The views and inclinations of whiteness are accepted as the objective neutral. (Lowery, Reference Lowery2020)

Critics have similarly exposed important flaws in the perspectival account of objectivity, arguing that journalists who operate with a conception of objectivity as “a view from nowhere” often end up treating the testimonies of marginalized communities as not credible, thereby excluding their voices entirely from journalists’ assessment and adjudication of the evidence. In this vein, journalist Pacinthe Mattar has complained that “there’s an added burden of proof, for both journalists and sources, that accompanies stories about racism. … How can the media be trusted to report on what Black and other racialized people are facing when it doesn’t even believe them?”

The criticism of objectivity is that it hides moral failure under the cloak of epistemology: it makes bias look like neutrality. What is touted as epistemic authority is really just an application of power. If we take these criticisms seriously, as I think we should, the remaining options are to conclude either that objectivity is not a reasonable ideal for journalism in the first place, or that the third, procedural account of objectivity is better able to respond to critics’ reasonable complaints. Those journalists in the latter camp have underlined the distinction between objectivity as a standard for journalists’ epistemic methods, which they endorse, and objectivity as a standard for journalists’ moral behavior or beliefs, which they do not. Kovach and Rosenstiel (Reference Kovach and Rosenstiel2021) accordingly write that:

Objectivity was not meant to suggest that journalists were without bias. To the contrary, precisely because journalists could never be objective, their methods had to be. In the recognition that everyone is biased, in other words, the news, like science, should flow from a process for reporting that is defensible, rigorous, and transparent. (xxviii)

I have defended a similar account of objectivity grounded in contemporary practices of fact-checking, which consider the complications of fact-checking stories about lived experience in the context of structural injustice and marginalization (Baker & Fairbank, Reference Baker and Fairbank2022; Fairbank, Reference Fairbank2021).

The idea is that criticisms of journalism arising from the Objectivity Wars are legitimate, but they are not fatal to journalistic objectivity in principle; rather, they arise from the fact that (some) journalists and (some of) their audience have largely misunderstood journalists’ epistemic obligations and operative concepts. Adhering to a procedural version of objectivity, according to which “journalists aim to be maximally rigorous and exhaustive in establishing facts,” does not prevent us from acknowledging the kinds of considerations brought up by critics; to the contrary, it gives us a basis from which to criticize those journalists who did not appropriately live up to the norms in question. In short: if a journalist unfairly dismisses a source as irrelevant or untrustworthy, they are failing to live up to the epistemic standards of their discipline, in addition to further ethical ones. Indeed, Lowery’s complaints about objectivity seem to be in line with this sentiment, since he calls for reporters to focus instead on “being fair and telling the truth, as best as one can, based on the given context and available facts.” This is an epistemic demand.

Of course, it may well be that journalistic objectivity is by now too polluted a concept to be of use, and that it is best abandoned; clearly, the concept has been misused to justify exclusion and bias. I am tempted to maintain the ideal while clarifying the norms that underlie it, but nothing hangs on this terminological issue. Although objectivity in journalism is controversial, the importance of epistemic norms and verification is not.

The practice of editorial fact-checking shows how complex the relationship is between epistemic and other norms in journalism. For example, it is expected that whenever appropriate and within reason, a journalist will give people a choice over how their stories will be reported and fact-checked, but not whether the relevant facts will be checked. (This is called the Collaboration Principle for fact-checking in Baker and Fairbank (Reference Baker and Fairbank2022).) This is a way of ensuring that the relationship between journalists and their sources and audience is one of respect—and this, in turn, ensures that journalists’ reporting is epistemically responsible. Interviewing sources in a trauma-informed manner, too, is understood as a way of being rigorous and of aiming for accuracy: journalists are more likely to get to the truth if their sources trust them (Baker & Fairbank, Reference Baker and Fairbank2022). In this sense, regarding journalistic practice, ethics and epistemology are not in tension, but rather go hand in hand. This is an insight that pairs well with procedural accounts of journalistic objectivity.

Critics of journalistic objectivity have rightly emphasized the discipline’s failure to acknowledge the voices of marginalized communities and to treat their testimonies as credible. This is not a reason to abandon the priority of epistemic norms in journalism, or to reject the CBS; rather, it underlines the fact that questions of evidence are intricately related to questions of equity—including whether some voices are treated as more reliable than others.

6. Epistemic Norms of Journalistic Assertion

In this final section, I consider a further concern that epistemologists may raise: that the Confirmation Norm is not properly epistemic at all. Indeed, as I have emphasized in Section 5, the Confirmation Norm seems to incorporate ethical and practical considerations, such as the Collaboration Principle, that go beyond the scope of “pure” epistemology. I argue that this is not a problem for the account, but rather an interesting way of showing how applied contexts differ from the idealized contexts in which certain epistemological conversations occur.

It might be said that journalism is a kind of assertion, and if this is the case, then it ought to be governed by whatever epistemic norm governs assertion. Indeed, Simion (Reference Simion2017) has provided a convincing account of the normative framework of news publishing: when a journalist reports an item of news, they are performing an informative speech act, which is a species of asserting that p with the necessary characteristics that “S reports that p only if (1) S asserts that p for at least one hearer H, (2) H uptakes S’s assertion, and (3) the purpose of S asserting that p is to inform H that p” (415). News reporting is therefore always a public act of assertion intended to inform the audience. Simion argues that this understanding of journalism allows for the identification of epistemic and prudential norms for news reporting.Footnote 28

Epistemologists of journalism can adopt Simion’s framework but take one step back and ask: What is the relevant norm of assertion for journalism? The Confirmation Norm is plausibly one way of answering this question: what matters is that, somehow, journalists check their facts before publication. However, the Confirmation Norm is not in line with the standard answers in epistemology—the most popular of which are a truth norm, a knowledge norm, a warrant norm, and a belief norm (Brown & Cappelen, Reference Brown and Cappelen2011). How does the Confirmation Norm fit, if at all?

We should, I think, accommodate the intuition that journalists’ activities are to be distinguished from our everyday epistemic interactions. It is commonly acknowledged, for example, that reading something in a published work of journalism is of greater evidential importance than overhearing the same thing being discussed among friends over dinner. We assume that a journalist has an individual ethical responsibility to believe the stories of victims of sexual violence but a professional responsibility to fact-check those stories before publishing them. The Confirmation Norm for journalistic assertion might thus be taken as evidence that the norms for assertion vary with context, as has been argued by Brown (Reference Brown2010), Gerken (Reference Gerken2012), and Goldberg (Reference Goldberg2015), among others. Another option might be to understand the Confirmation Norm as a strengthened version of a warrant norm, such as Lackey’s (Reference Lackey2007) Reasonable to Believe norm.

An epistemologist might complain, however, that the last two sections of this article have only emphasized how much baggage is packed into the journalistic notion of confirmation—much more than what an epistemologist would traditionally support. Are the requirements to collaborate with one’s sources or to be trauma-informed during fact-checking, as outlined in Section 5, really epistemic requirements? My response is that, although such principles seem to go beyond “pure” epistemology, they are epistemologically justified: they are conducive to the attainment of verified (i.e., objective, journalistic) facts—indeed, I think, they are required.

This highlights the importance of not over-idealizing epistemology (cf. Kukla, 2021). Responsible journalism must consider how our assessments of authority and expertise can be tainted by bias and systemic injustice: our epistemic activities are inevitably influenced by non-epistemic factors. The solution is to understand confirmation as a social epistemic affair, one that involves the triangulation of sources and deference to those who know best. In this way, the Confirmation Norm makes room for nonideal theorizing about the actual practices of journalists, and science journalists in particular.

I take this nonideal perspective on journalism to be a virtue of my account, as well as an incentive for further work in the epistemology of journalism. The notions of rigor and responsibility that are fundamental to editorial fact-checking can arguably never be described in “purely” epistemic terms once they are placed in a nonidealized context—as, in journalistic practice, they always are. This is not to say that there is no distinction between epistemic and ethical or other norms in journalism; on the contrary, the fact that there is such a distinction is what motivates my defense of the CBS. Rather, journalists tend to carve up the “epistemic territory” in an interesting way that diverges from how it is done in abstract epistemology.

7. Conclusion

I hope that this article serves as an example of the kind of interdisciplinary work that can be conducted in the philosophy of journalism. I have argued that the Knowledge-Based Solution to the ECSJ is unsatisfactory, and that the Confirmation-Based Solution, according to which science journalists are governed by a norm of confirmation, shows the way forward. Thinking about the norms of science journalism has led us to reflect on the norms of journalism more generally, and to see how, in applied cases, epistemology is a nuanced affair.

Acknowledgments

Every point about fact-checking made here is indebted to past conversations and collaborations with Allison Baker. Sandy Goldberg and Susanna Siegel helped the author think through early versions of the ideas presented in this article. The author would also like to thank Jessica Brown for her remarkably helpful comments on a draft of this article, as well as three anonymous reviewers for their useful questions and suggestions.

Viviane Fairbank is a PhD student at the St Andrews and Stirling Graduate Programme in Philosophy. Her research interests include the philosophy of logic, feminist philosophy, and epistemology. She has previously worked as a journalist.

Footnotes

1 “As any reporter should know, a ‘study’ of a few children proves nothing, and a study of a handful of children that has found no confirmation and has indeed been roundly and extensively rebutted might not deserve yet another hearing for its author in the press” (Raeburn, 2011).

2 In the journalism literature, verification is the standard word for what I call confirmation; I have changed the terminology to avoid confusion with “verificationist” positions in philosophy. Note that my use of confirmation is also not intended to relate to “confirmation theory” in epistemology.

3 Details of the study’s shortcomings, including the methodological and ethical flaws that led to its retraction by The Lancet, can be found in Rao and Andrade (2011) and Mnookin (2011).

4 Deer reported on health and medicine for the magazine, and his coverage of the issue began with a focus on Wakefield’s undisclosed conflicts of interest; as he continued to report on the story over several years, he underlined the importance of approaching Wakefield’s scientific claims with appropriate skepticism and taking the time to speak with other scientists as well as participants in the 1998 study. Deer has written about the thoroughness with which his reporting on the issue was vetted—including the use of 16,000 documents and recordings, which were checked by editors, producers, peer reviewers, lawyers, and eventually a five-member panel of the UK General Medical Council (Deer, 2024).

5 Note that my focus here is not on those actors who intended to misrepresent the facts regarding MMR vaccines, but rather on those journalists who aimed to communicate the facts faithfully and failed.

6 By focusing on epistemic responsibilities, I am setting aside ethical, political, and other considerations. These certainly also play a part in the justified criticism of science journalists who reported on the purported vaccine–autism link, but they are not the focus of this paper. I am consequently not addressing people who think that science journalism ought primarily to humanize science, promote environmentalism, suggest policy options, or do anything else of the sort (e.g., Hertsgaard & Pope, 2021; Nisbet & Fahy, 2015).

7 False balance in journalism has been analyzed at length by journalists, media scholars, and philosophers (see, e.g., Simion, 2017; Gerken, 2022). In this article, because I assume most journalists intended to inform the public and did not realize their error, I understand false balance as symptomatic of an epistemic deficiency. This lines up with how Dixon and Clarke (2013), for example, explain the problem: balance in journalism is irresponsible—it becomes false balance—when journalists “fail to scrutinize details, avoid errors, and ensure that the perspective with the most supporting scientific evidence is conveyed,” and when they fail to add “context about where strength of evidence lies.” These journalistic failures can lead to the suggestion that “well-established science is being openly debated within the scientific community when, in fact, that is not the case” (361).

8 Goldacre (2008): “The blame lies… with the hundreds of journalists, columnists, editors, and executives who drove this story cynically, irrationally, and willfully onto the front pages for nine solid years…. They overextrapolated from one study into absurdity, while studiously ignoring all reassuring data, and all subsequent refutations” (238).

9 See, for example, the KSJ Science Editing Handbook (Blum et al., 2020).

10 These educational initiatives tended to adopt Patterson’s (2013) distinction between content knowledge, which is knowledge of a subject, and process knowledge, which is knowledge of how reporting methods affect news content and its impact. Both types of knowledge are meant to be at the foundation of knowledge-based journalism, and both are taught to students in some programs. In this article, I focus only on the content knowledge—or subject-matter expertise (Donsbach, 2014)—side of things, because it is of direct relevance to the ECSJ.

11 In philosophy, too, Figdor (2017) has similarly criticized the “epistemic helplessness” (3) of science journalists who have no scientific knowledge.

12 I follow Kahan et al. (2012) in conceptualizing the knowledge involved in science literacy as propositional knowledge of rudimentary scientific facts. The authors define science literacy according to the NSF’s Science and Engineering Indicators, which “are widely used as an index of public comprehension of basic science,” and which test subjects’ knowledge of basic statements such as “Antibiotics kill viruses as well as bacteria [true/false]” (735). This places KBS-(2) squarely between the kind of knowledge required by KBS-(1)—practical knowledge of science, including how science works—and the kind required by KBS-(3)—specialized knowledge about a particular domain of science.

Some people might instead conceive of the knowledge involved in science literacy as practical—perhaps a kind of actionable understanding of how science works, or how to identify scientific facts and their justification. In this case, science literacy may well be closer to the epistemic requirements of science journalism. However, this would be a minority account of science literacy (see Norris, Phillips, & Burns, 2014), and it is not the kind of knowledge trumpeted by the KBS; therefore, it is not my target here.

13 In a survey of 249 environmental journalists, 14% of whom had majored in environmental studies, Wilson (2000) found that reporting experience correlated more highly with accurate reporting on climate change than any educational background. “Overall, percentage of time spent on environmental reporting and use of scientists as sources were the most consistent, statistically significant predictors of climate change knowledge” (7).

14 See, however, Bauer (2016): “Despite 20-plus years of polemics and positioning against the deficit concept, it seems that this concept has an unusual staying power” (398; see also Trench and Junker, 2001).

15 “Consider, for example, an amateur historian who has memorized most of the publicly available information about an era but lacks the ability to synthesize the information or provide historical analysis of it. Or consider a butterfly connoisseur who has memorized a fantastic amount of butterfly trivia but completely lacks an understanding of the basics of biology. Such individuals appear to lack an important aspect of scientific expertise” (Gerken, 2022, 23).

16 This does not mean that a competent science journalist knows nothing, or ought to know nothing. Of course, it is not problematic for a science journalist to be knowledgeable about science. I am arguing only that the knowledge requirements articulated by KBS-(1), -(2), and -(3) are not appropriate as responses to the ECSJ.

17 Practical considerations, such as budget and time constraints, can substantially affect how fact-checking is carried out. Despite financial setbacks throughout the industry, however, there is general agreement among journalists that fact-checking norms should be maintained. In cases where sacrifices to fact-checking must be made—for example, when working on a developing news story under a tight deadline, when the threshold of verification might have to be lowered—journalists and fact-checkers prioritize transparency of sourcing and methodology, so that their audiences are aware of the compromises they have made. Further discussion of such constraints can be found in Baker and Fairbank (2022).

18 Political or media fact-checking outlets include Politifact, FactCheck.org, and Snopes. Editorial fact-checking, on the other hand, is the kind of prepublication checking done by The New Yorker, Smithsonian, Der Spiegel, and others.

19 For further details regarding the fact-checking methodology, see Baker and Fairbank (2022) and Borel (2018).

20 FiveThirtyEight, for example, a publication that specializes in data journalism, has a “quantitative editor who works on an as-needed basis to double-check research findings” (Borel, 2018, 11).

21 It is worth noting that I have previously worked as an editorial fact-checker, and for several years, I was the head of research at a monthly magazine. This experience has shaped many of my thoughts on these issues, including in Fairbank (2021) and Baker and Fairbank (2022).

22 Epistemologists may think of authority in reliabilist terms, though for the sake of space, I will not elaborate on this notion here.

23 Borel notes that most science fact-checkers (87%) do not have a science degree (3). I take this to be another point against the KBS: assessing scientific evidence does not seem to require a specialized science education, if one is a competent journalist.

24 “The testimony of human sources [has] always contributed the bulk of published news” (Godler et al., 2020, 216).

25 This conclusion is not in tension with the idea that journalists have or ought to have some kind of knowledge about science. Fact-checkers can make use of scientific knowledge—for example, knowledge of scientific publication norms, funding practices, and statistical methods (which is not knowledge of the kind demanded by the KBS). Science journalists might also happen to know plenty of scientific facts; they will nevertheless be obliged to check them before publishing them.

26 Of course, the Confirmation Norm does not prevent journalists from reporting on the MMR-vaccine case at all. It only stipulates that, if they do report on it, they must confirm the facts they report—and this severely restricts the claims that they can publish about it. I think this is the right result, but it means that some arguably epistemic criticisms of science journalists (about their choices of what to cover in the first place) will not fall under the scope of the CBS.

27 See, for example, Mattar (2020), Lowery (2020), and Wallace (2019), as well as this recent event hosted by the Columbia Journalism Review: https://www.cjr.org/analysis/objectivity-wars-event-media-trust.php.

28 Here, I understand news as any piece of information that a journalist might be interested in publishing. Interpreting the word in this way allows me to bring together various kinds of journalism—not only daily news but also investigative reporting, long-form productions, explanatory writing, and others, all of which I take to be bound by the Confirmation Norm.

References

Allsop, J. (2020). The Lancet’s cutting edge. Columbia Journalism Review, October 8. https://www.cjr.org/special_report/the-lancet-covid-19-medical-studies-politics.php
Anderson, C. W., & Schudson, M. (2019). Objectivity, professionalism, and truth seeking. In Wahl-Jorgensen, K., & Hanitzsch, T. (Eds.), The handbook of journalism studies. Lawrence Erlbaum.
Baker, A., & Fairbank, V. (2022). The truth in journalism fact-checking guide. https://thetijproject.ca/guide
Bauer, M. W. (2016). Results of the essay competition on the ‘deficit concept’. Public Understanding of Science, 25(4), 398–399.
Becker, K. (2007). Epistemology modalized. Routledge.
Blum, D., Hatch, J., & Jackson, N. (2020). KSJ science editing handbook. https://ksjhandbook.org/wp-content/uploads/sites/5/2022/07/ksj-handbook-v2.1.pdf
Brainard, C. (2013). Sticking with the truth. Columbia Journalism Review. https://www.cjr.org/feature/sticking_with_the_truth.php
Brown, J. (2010). Knowledge and assertion. Philosophy and Phenomenological Research, 81, 549–566.
Brown, J., & Cappelen, H. (2011). Assertion: New philosophical essays. Oxford University Press.
Callison, C., & Young, M. L. (2020). Reckoning: Journalism’s limits and possibilities. Oxford University Press.
Collins, H., & Evans, R. (2015). Expertise revisited, part I: Interactional expertise. Studies in History and Philosophy of Science Part A, 54, 113–123.
Deer, B. (2024). Checking ‘The Doctor Who Fooled the World’ is true. https://briandeer.com/fact-checking-investigation.htm
Dixon, G. N., & Clarke, C. E. (2013). Heightening uncertainty around certain science: Media coverage, false balance, and the autism-vaccine controversy. Science Communication, 35(3), 358–382.
Donsbach, W. (2014). Journalism as the new knowledge profession and the consequences for journalism education. Journalism, 15(6), 661–677.
Fairbank, V. (2021). How do we exit the post-truth era? The Walrus, April 7. https://thewalrus.ca/how-do-we-exit-the-post-truth-era/
Figdor, C. (2017). (When) is scientific reporting ethical? The case for recognizing shared epistemic responsibility in science journalism. Frontiers in Communication, 2, 17.
Gerken, M. (2012). Discursive justification and skepticism. Synthese, 189, 373–394.
Gerken, M. (2022). Scientific testimony. Oxford University Press.
Godlee, F. (2011). The fraud behind the MMR scare. British Medical Journal, 342, d22.
Goldacre, B. (2008). Bad science. Fourth Estate.
Goldberg, S. C. (2015). Assertion: The philosophical significance of a speech act. Oxford University Press.
Hertsgaard, M., & Pope, K. (2021). Climate journalism is coming of age. Columbia Journalism Review, October 6. https://www.cjr.org/covering_climate_now/climate-journalism-awards.php
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2, 732–735.
Kovach, B., & Rosenstiel, T. (2021). The elements of journalism: What newspeople should know and the public should expect (4th ed.). Three Rivers Press.
Kukla, Q. R. (2021). Situated knowledge, purity, and moral panic. In Lackey, J. (Ed.), Applied epistemology. Oxford University Press.
Lackey, J. (2007). Norms of assertion. Noûs, 41(4), 594–626.
Lackey, J. (2021). Applied epistemology. Oxford University Press.
Lewis, J., & Speers, T. (2003). Misleading media reporting? The MMR story. Nature Reviews Immunology, 3(11), 913–918.
Li, N., Stroud, N. J., & Jamieson, K. H. (2017). Overcoming false causal attribution: Debunking the MMR–autism association. In Jamieson, K. H., Kahan, D. M., & Scheufele, D. A. (Eds.), The Oxford handbook of the science of science communication. Oxford University Press.
Lowenstein, F. (2021). Body Politic’s comprehensive guide to covering long COVID. https://www.wearebodypolitic.com/bodytype/2021/7/6/long-covid-guide
Lowery, W. (2020). A reckoning over objectivity, led by Black journalists. The New York Times, June 23. https://www.nytimes.com/2020/06/23/opinion/objectivity-black-journalists-coronavirus.html
Mattar, P. (2020). Objectivity is a privilege afforded to white journalists. The Walrus, August 21. https://thewalrus.ca/objectivity-is-a-privilege-afforded-to-white-journalists/
Meyer, M. (2010). The rise of the knowledge broker. Science Communication, 32(1), 118–127.
Mnookin, S. (2011). The panic virus. Simon and Schuster.
Nisbet, M. C., & Fahy, D. (2015). The need for knowledge-based journalism in politicized science debates. The Annals of the American Academy of Political and Social Science, 658, 223–234.
Nisbet, M. C., & Fahy, D. (2017). New models of knowledge-based journalism. In Jamieson, K. H., Kahan, D. M., & Scheufele, D. A. (Eds.), The Oxford handbook of the science of science communication. Oxford University Press.
Norris, S. P., Phillips, L. M., & Burns, D. (2014). Conceptions of scientific literacy: Identifying and evaluating their programmatic elements. In Matthews, M. R. (Ed.), International handbook of research in history, philosophy and science teaching. Springer.
Offit, P. A. (2007). Thimerosal and vaccines—a cautionary tale. The New England Journal of Medicine, 357(13), 1278–1279.
Patterson, T. E. (2013). Informing the news: The need for knowledge-based reporting. Random House.
Raeburn, P. (2011). NYT Magazine: Autism, vaccines, and a story that should not have been written. Knight Science Journalism @ MIT, April 25. https://ksj.mit.edu/tracker-archive/nyt-magazine-autism-vaccines-and-story-s/
Rao, T. S. S., & Andrade, C. (2011). The MMR vaccine and autism: Sensation, refutation, retraction, and fraud. Indian Journal of Psychiatry, 53(2), 95–96.
Reincke, C. M., Bredenoord, A. L., & van Mill, M. H. W. (2020). From deficit to dialogue in science communication. EMBO Reports, 21, e51278.
Rensberger, B. (2000). The nature of evidence. Science, 289(5476), 61.
Schulman, D. (2005). Drug test. Columbia Journalism Review, November/December.
Shapiro, I., Brin, C., Bédard-Brûlé, I., & Mychajlowycz, K. (2013). Verification as a strategic ritual. Journalism Practice, 7(6), 657–673.
Simion, M. (2017). Epistemic norms and ‘He Said/She Said’ reporting. Episteme, 14(4), 413–422.
Stephens, M. (2014). Beyond news: The future of journalism. Columbia University Press.
Trench, B., & Junker, K. (2001). How scientists view their public communication. In 6th International Conference on PCST, Geneva, Switzerland, 89–103.
Tuchman, G. (1972). Objectivity as strategic ritual: An examination of newsmen’s notions of objectivity. American Journal of Sociology, 77(4), 639–819.
Wallace, L. R. (2019). The view from somewhere: Undoing the myth of journalistic objectivity. University of Chicago Press.
Ward, S. J. A. (2015). The invention of journalism ethics: The path to objectivity and beyond. McGill-Queen’s University Press.
Wilson, K. M. (2000). Drought, debate, and uncertainty: Measuring reporters’ knowledge and ignorance about climate change. Public Understanding of Science, 9(1), 1–13.
Yong, E. (2023). Reporting on long covid taught me to be a better journalist. The New York Times, December 11. https://www.nytimes.com/2023/12/11/opinion/long-covid-reporting-lessons.html