
Naomi Oreskes, Why Trust Science? (Princeton, NJ: Princeton University Press, 2019). 376 pages. ISBN: 9780691179001. Hardcover $24.95. - Garret Christensen, Jeremy Freese, and Edward Miguel, Transparent and Reproducible Social Science Research: How to Do Open Science (Berkeley: University of California Press, 2019). 272 pages. ISBN: 9780520296954. Paperback $34.95.


Published online by Cambridge University Press: 12 June 2020

Alexander Wuttke
Affiliation: University of Mannheim
Correspondence: Alexander Wuttke, Email: alexander.wuttke@uni-mannheim.de

Type: Book Review
Creative Commons License: CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s) 2020. Published by Cambridge University Press on behalf of the Association for Politics and the Life Sciences

The trustworthiness of scientific findings is at the center of current scholarly and public debates. The contestation of scientific knowledge claims by critics outside science and by reform movements within science is a welcome reason to take a break from our everyday tasks as scientists and to reflect on our doings as professional truth seekers. It is no wonder that psychological science in particular, where the “replication crisis” has shattered not only established literatures but also the identities of many researchers, shows a surge in meta-scientific interest. Empirical scholars who usually busy themselves with t-tests and sample recruitment are now revisiting fundamental questions of scientific inquiry: On what grounds can we claim that scientific findings deserve trust? What makes the “scientific method” a superior gateway to knowledge, and what constitutes that “scientific method” in the first place?

Two new, timely books reconsider these questions. In Why Trust Science? Naomi Oreskes condenses philosophical accounts of whether and how scholarly inquiry can discover truths about the world. In Transparent and Reproducible Social Science Research: How to Do Open Science, Garret Christensen, Jeremy Freese, and Edward Miguel examine the practice of scholarly inquiry, describing what needs to be done so that the reality of scientific knowledge creation more closely aligns with philosophical ideals. Together, these books demonstrate the range of what we can learn from the new wave of “research on research,” both as curious citizens and as academic scholars.

Oreskes, a Harvard University professor, is famous for a study on climate change (Oreskes, 2004), in which she employed a consensus analysis of the scientific literature to establish the now oft-cited fact that scientific experts agree on the existence of anthropogenic climate change. In Why Trust Science? Oreskes picks up the concept of “consensus” to argue that science’s social nature is key to its success and legitimacy.

The book takes the reader on an intellectual expedition into the epistemological foundations of science. It is one of those texts that require the reader to pause after each page to scribble down all the new ideas that challenge or enrich one’s view of the world. For instance, the book rebuts common ideas about the informational value of single experiments. Many scholars are aware of the impossibility of verifying a theory but believe in scientific progress through theory falsification. Yet, as any scholar whose studies have yielded hypothesis-incongruent results has discovered, and contrary to the idea of a crucial experiment, null results rarely if ever compel the conclusive refutation of a theory, because any particular set of results is consistent with numerous explanations, including errors in measurement or study administration. In this vein, the principle of underdetermination discussed by Oreskes implies that any empirical test confirms or weakens a theory and its accompanying auxiliary hypotheses only in a probabilistic sense, giving empirical findings a much more confined informational value than is sometimes claimed. Insights like these are representative of Oreskes’s general approach, which challenges widespread Popperian ideas and, at first sight, seems to paint a bleak picture of science’s explanatory power.
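To make the probabilistic reading of underdetermination concrete, here is a minimal Bayesian sketch of the argument; the notation is mine, not Oreskes’s, and it assumes for simplicity that the auxiliary assumptions A (sound measurement, correct study administration) are independent of the theory T under test.

```latex
% Posterior credibility of theory T after observing a null result E:
\[
  P(T \mid E) \;=\; \frac{P(E \mid T)\,P(T)}{P(E)},
  \qquad
  P(E \mid T) \;=\; P(E \mid T, A)\,P(A) \;+\; P(E \mid T, \neg A)\,P(\neg A).
\]
% Even under the "crucial experiment" intuition that theory plus sound
% auxiliaries nearly forbid the null result, i.e. P(E | T, A) close to 0,
% the likelihood P(E | T) stays bounded away from zero whenever P(\neg A)
% is non-negligible. The null result may indict the auxiliaries rather
% than the theory, so E lowers P(T) only probabilistically, never to zero.
```

On this reading, a single null result carries real but limited information, which is exactly the confined informational value that underdetermination implies.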

Oreskes guides the reader through hundreds of years of philosophy of science, weaving the different threads of thought together to argue that there is no such thing as the scientific method. Science merely denotes the range of behavior that the collective of academic experts considers appropriate for approaching an unresolved question. This complicates matters for the legitimacy of science: there is no “magic key,” no single method that could establish the superiority of science over other means of knowledge acquisition. And because a “scientific method” does not exist, there is also no standard that could guarantee the validity of any particular knowledge claim. Hence, Oreskes considers the dream of positive knowledge unattainable; not even science can help us ever know something for sure.

Liberating scientific inquiry from expectations it is bound to disappoint enables Oreskes to make a modest but persuasive case for the trustworthiness of science. Even though there is no magic key to positive knowledge, and even though previous experience indicates that today’s scientific truth claims will likely be considered imperfect or outright false in the future, Oreskes holds that there are good reasons to trust science. What characterizes the epistemic value of science, according to Oreskes, is scientists’ sustained engagement with the world and science’s social character: like a plumber who deals with pipes day in and day out, a neuroscientist has likely engaged more thoroughly with the brain than most other individuals and thus knows more about the appropriate methods for examining a particular question about the human brain. Moreover, the neuroscientist’s findings are vetted and potentially revised by other scholars with field expertise. Hence, all academic findings ultimately emerge from social interactions, which make them more robust against individual biases and oversights. Altogether, while Oreskes’s characterization of science’s epistemic qualities is more mundane than we might have hoped for, this demystification of science makes for an honest and pragmatic case for the value of scientific inquiry.

Still, science has no exclusive right to credible knowledge claims, and any scientific finding may turn out to be wrong. Hence, while deep substance knowledge and social vetting foster the trustworthiness of scientific evidence, some scientific truth claims are more credible than others. Here, Oreskes picks up on her earlier work to argue that scientific knowledge claims are most credible when they represent a consensus of field experts. What is more, credibility is greatest when the scientific consensus is established through open discourse that considers the input of diverse participants with different points of view.

Oreskes’s book is authoritative when summarizing the long line of philosophical thinking on science, but it is less convincing when discussing current meta-scientific work on the actual practice of contemporary science. In one of the chapters replying to her main arguments, political psychologist Jon Krosnick points to science’s current “replication crisis.” He discusses structural biases in the academic incentive structure that can repeatedly produce false scholarly consensus because the scientific literature rarely presents the entire available evidence. In her response, however, Oreskes brushes away most of Krosnick’s concerns about systemic threats to the credibility of scientific findings, despite a rapidly growing body of evidence to the contrary. For instance, she suggests that discussions about structural shortcomings in replicability and transparency are confined mostly to psychology or biomedicine (pp. 228, 230) and that replication failures have refuted only single studies, not established phenomena representing scholarly consensus (p. 233f).

These claims do not reflect the current state of meta-scientific research. While the intensity differs across academic fields, it is hard to find a discipline in which the systemic lack of replicability and transparency is not being discussed (Baker, 2016; Christensen et al., 2019). “Open science” is a current topic in disciplines as diverse as dinosaur paleontology (Tennant & Farke, 2019), ecology (Powers & Hampton, 2019), and communication and political science (Dienlin et al., 2020; Wuttke, 2019). Also, while Oreskes is right to remind advocates of open science reforms like myself that robust empirical evidence on these topics is still scarce, recent meta-scientific research has not only upset singular studies; it has also fundamentally challenged textbook phenomena that previously enjoyed widespread support in the literature, such as ego depletion (Friese et al., 2019) and social priming (Chivers, 2019). Notably, while Oreskes interprets these failed replications as evidence that science’s self-correction is working, she overlooks that scholars needed, and still need, to overcome structural hurdles when attempting to publish replications that refute conventional scholarly wisdom (Goldacre et al., 2019). Most crucially, while Oreskes makes the general point that we should be concerned about the epistemic value of consensus if “evidence is being discounted or being weighted asymmetrically” (p. 141), she disregards the arguments and evidence on structural biases in contemporary science: academia’s incentive structure favors novel, clean, and exciting results while discounting replication evidence, null results, and complex findings that do not tell a simple story.

Problems arising from these incentive structures, potential remedies, and practical implications for individual researchers are discussed in another new book with a meta-scientific focus. With Transparent and Reproducible Social Science Research: How to Do Open Science, Christensen, Freese, and Miguel provide the first book-length primer on contemporary open science debates in the social sciences. Theoretically and empirically, the book highlights selected examples from the burgeoning meta-scientific literature to illustrate how the scientific reward system implemented by journals or hiring committees promotes publication bias, p-hacking, and selective reporting. For instance, the authors describe a study by Franco et al. (2014) of a National Science Foundation–sponsored competition that allowed 221 survey projects to be fielded free of charge. The analysis of this entire population of fielded research projects reveals that results were three times more likely to find their way into the scientific record when the obtained findings were clean and consistent with a priori hypotheses. In the case of null or mixed results, scholars often failed to guide the study through peer review or did not even try, so that, in effect, the scholarly community was unlikely to ever learn about these results.

The book by Christensen, Freese, and Miguel thus shows that scholarly norms and scientific gatekeepers, intentionally or unintentionally, cause a structural underrepresentation of studies depending on their results. Because not everything that scientists discover about the world has an equal chance of getting published, the published literature paints a distorted picture of the accumulated knowledge. These biases are a problem for anyone who wants to examine the published literature to determine whether a consensus exists among academic experts on a particular question.
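A small simulation can make this distortion concrete. The sketch below is my own illustration, not an analysis from the book: it assumes a true effect of exactly zero and lets significant (“clean”) results enter the record at an assumed rate three times that of null results, loosely echoing the asymmetry Franco et al. report.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stylized setup (assumed numbers): 1,000 studies of a TRUE effect of
# zero, each analyzed with a two-sample t-test, n = 50 per arm.
n_studies, n = 1000, 50
all_studies, record = [], []
for _ in range(n_studies):
    treat, ctrl = rng.normal(0.0, 1.0, n), rng.normal(0.0, 1.0, n)
    _, p = stats.ttest_ind(treat, ctrl)
    all_studies.append(p)
    # Assumed gatekeeping: significant results reach the published
    # record at three times the rate of null results (0.6 vs. 0.2).
    if rng.random() < (0.6 if p < 0.05 else 0.2):
        record.append(p)

share = lambda ps: np.mean([p < 0.05 for p in ps])
# All fielded studies hover at the nominal 5% significance rate; the
# published record over-represents significant findings roughly threefold.
print(f"significant share, all studies fielded: {share(all_studies):.2f}")
print(f"significant share, published record:    {share(record):.2f}")
```

Anyone reading only the published record in this toy world would conclude that an effect probably exists, which is precisely the threat to consensus-reading that the book describes.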

As remedies, the book proposes preregistration to prevent specification search and sensitivity analyses to reveal the entire evidence underlying a knowledge claim. If the authors’ diagnosis of incentive-induced structural biases in the academic literature is correct, and if the suggested tools are adequate remedies, then we would expect the composition of scientific literatures to change once the incentive structure is modified and bias-mitigating reforms are implemented. While not yet conclusive, recent studies support this proposition, showing that minimizing opportunities for specification search and rewarding sound questions and methods independent of a study’s outcome affect the composition of published findings, such as the share of hypothesis-refuting results (Kaplan & Irvin, 2015; Kvarven et al., 2019; Murray et al., 2020; Scheel et al., 2020). While norms and incentive structures therefore seem key to understanding and overcoming biases, and thus to increasing the epistemological value of researcher consensus in the published literature, the book also provides practical advice for individual researchers who want to make their studies more informative; the sketch below illustrates the mechanism that preregistration targets.
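The following is a minimal sketch of specification search, again with assumed numbers of my own rather than anything from the book: on pure-noise data, a researcher free to try every subset of four candidate control variables gets sixteen shots at p < .05 instead of one.

```python
import numpy as np
from scipy import stats
from itertools import combinations

rng = np.random.default_rng(1)

def specification_search(n=100, n_covs=4, alpha=0.05):
    """Return True if ANY covariate-subset specification on a
    pure-noise dataset reaches p < alpha."""
    x = rng.normal(size=n)                # "treatment", unrelated to y
    y = rng.normal(size=n)                # outcome: pure noise
    covs = rng.normal(size=(n, n_covs))   # candidate control variables
    for k in range(n_covs + 1):
        for subset in combinations(range(n_covs), k):
            # One "specification" per covariate subset: residualize y on
            # the chosen controls, then test x against the residuals.
            if subset:
                Z = covs[:, list(subset)]
                beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
                resid = y - Z @ beta
            else:
                resid = y
            _, p = stats.pearsonr(x, resid)
            if p < alpha:
                return True
    return False

trials = 1000
rate = sum(specification_search() for _ in range(trials)) / trials
# With 16 specifications to choose from, the false-positive rate lands
# well above the nominal 5%; a preregistered analysis commits to ONE
# specification in advance and restores the nominal rate.
print(f"false-positive rate under specification search: {rate:.2f}")
```

Preregistration does not forbid exploring the other specifications; it merely forces the researcher to label one analysis as confirmatory in advance, which is what keeps the error rate interpretable.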

The book offers guidance for opening up one’s research process by publishing data, writing intelligible code, and conforming to reporting standards. In this vein, the book’s advice, and the aspirations of the open science movement on which it draws, can be understood as an attempt to enable other researchers to meaningfully vet and potentially revise scientific claims. In other words, by being transparent in all phases of a research project, by making the entire evidence accessible, and by sharing materials and documenting methods, the open science practices illustrated in the book create the conditions that help realize the epistemological potential of science’s social character that Oreskes identifies (Elman et al., 2018). Hence, despite their differing appraisals of current meta-scientific findings, the two books share a common understanding of the fundamental prerequisites for credible science.

Moreover, both books consider science generally trustworthy while emphasizing the provisional nature of any single study. A difference in background may explain why the books interpret current meta-scientific findings differently: empirical scholars who believe that we can resolve unsettled questions if we only work hard enough on a research design have had to learn the hard way about the provisional nature of all scientific findings. Philosophers and historians of science, in contrast, may not be as surprised by low replicability rates, as they have always been aware that all scientific knowledge claims are incomplete and will ultimately require revision or refutation.

Finally, these two books and the new wave of meta-science research remind us that scientific findings are not yielded by divine spark or an invincible “scientific method.” Rather, scientific findings are human creations, and the rules by which we produce and evaluate these findings are social products as well. Both books thus encourage us to take an occasional moment for self-reflection: to question what we are doing as individual researchers and as a scientific community, and whether we should shed certain habits to get better at our job of explaining the world around us.

References

Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature News, 533(7604), 452. https://doi.org/10.1038/533452a
Chivers, T. (2019). What’s next for psychology’s embattled field of social priming. Nature, 576(7786), 200–202. https://doi.org/10.1038/d41586-019-03755-2
Christensen, G., Wang, Z., Paluck, E. L., Swanson, N., Birke, D. J., Miguel, E., & Littman, R. (2019). Open science practices are on the rise: The State of Social Science (3S) Survey. https://doi.org/10.31222/osf.io/5rksu
Dienlin, T., Johannes, N., Bowman, N. D., Masur, P. K., Engesser, S., Kümpel, A. S., Lukito, J., Bier, L. M., Zhang, R., Johnson, B. K., Huskey, R., Schneider, F. M., Breuer, J., Parry, D. A., Vermeulen, I., Fisher, J. T., Banks, J., Weber, R., Ellis, D. A., … Vreese, C. de. (2020). An agenda for open science in communication. Journal of Communication, 17(5), 1. https://doi.org/10.1093/joc/jqz052
Elman, C., Kapiszewski, D., & Lupia, A. (2018). Transparent social inquiry: Implications for political science. Annual Review of Political Science, 21, 29–47. https://doi.org/10.1146/annurev-polisci-091515-025429
Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502–1505. https://doi.org/10.1126/science.1255484
Friese, M., Loschelder, D. D., Gieseler, K., Frankenbach, J., & Inzlicht, M. (2019). Is ego depletion real? An analysis of arguments. Personality and Social Psychology Review, 23(2), 107–131. https://doi.org/10.1177/1088868318762183
Goldacre, B., Drysdale, H., Dale, A., Milosevic, I., Slade, E., Hartley, P., Marston, C., Powell-Smith, A., Heneghan, C., & Mahtani, K. R. (2019). COMPare: A prospective cohort study correcting and monitoring 58 misreported trials in real time. Trials, 20(1), 1–16. https://doi.org/10.1186/s13063-019-3173-2
Kaplan, R. M., & Irvin, V. L. (2015). Likelihood of null effects of large NHLBI clinical trials has increased over time. PLOS ONE, 10(8), e0132382. https://doi.org/10.1371/journal.pone.0132382
Kvarven, A., Strømland, E., & Johannesson, M. (2019). Comparing meta-analyses and preregistered multiple-laboratory replication projects. Nature Human Behaviour. https://doi.org/10.1038/s41562-019-0787-z
Murray, S. B., Compte, E. J., Quintana, D. S., Mitchison, D., Griffiths, S., & Nagata, J. M. (2020). Registration, reporting, and replication in clinical trials: The case of anorexia nervosa. International Journal of Eating Disorders, 53(1), 138–142. https://doi.org/10.1002/eat.23187
Oreskes, N. (2004). Beyond the ivory tower: The scientific consensus on climate change. Science, 306(5702), 1686. https://doi.org/10.1126/science.1103618
Powers, S. M., & Hampton, S. E. (2019). Open science, reproducibility, and transparency in ecology. Ecological Applications, 29(1). https://doi.org/10.1002/eap.1822
Scheel, A. M., Schijen, M., & Lakens, D. (2020). An excess of positive results: Comparing the standard psychology literature with registered reports. PsyArXiv. https://psyarxiv.com/p6e9c/
Tennant, J., & Farke, A. (2019). Open science in dinosaur paleontology. PaleorXiv. https://paleorxiv.org/wzfpb/
Wuttke, A. (2019). Why too many political science findings cannot be trusted and what we can do about it: A review of meta-scientific research and a call for academic reform. Politische Vierteljahresschrift, 60(1), 1–19. https://doi.org/10.1007/s11615-018-0131-7