
Certified Amplification: An Emerging Scientific Norm and Ethos

Published online by Cambridge University Press:  30 May 2022

Carole J. Lee*
Affiliation:
Department of Philosophy, University of Washington, Seattle, WA, US
* Email: c3@uw.edu

Abstract

Merton envisioned his norms of science at a time when peer-reviewed journals controlled scientific communication. Technologies for sharing and finding content have since divorced the certification and amplification of science, generating systemic vulnerabilities. Certified amplification—a new Mertonian-styled norm—enjoins their recoupling. The paper introduces a taxonomy of strategies adopted by institutions to close the certification-amplification gap, including the proportioning of the one to the other. Examples illustrating each taxonomic type collectively paint a picture of an ethos employing a rich range of certification and amplification techniques and emerging in a decentralized fashion across heterogeneous objects, communication modalities, and institutions.

Type
Symposia Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of the Philosophy of Science Association

1. Introduction

Robert Merton envisioned his norms of science at a time when peer-reviewed journals controlled scientific communication. Now, thanks to technologies for sharing and finding information online, scientists can archive and establish priority for their research on preprint platforms and amplify their work on blogs and social networks without gatekeeping, creating new dynamics for capturing visibility and annexing broader audiences. With these radical disruptions to science’s communication structures, what happens to its normative structure and processes for self-governance?

In this paper, I propose certified amplification as a new Mertonian-styled norm and ethos. First, I argue that, under Merton’s framework, the norms should be understood as serving an instrumental role towards the extension of certified knowledge. This meta-normative perspective opens up the possibility that, had the institutional circumstances and challenges been different, science’s norms would have needed to be different as well. I will then argue that current technologies generate new systemic vulnerabilities for the extension of certified knowledge due to the divorcing of certification and amplification. Certified amplification—a norm enjoining their recoupling—addresses this problem. I will articulate a taxonomy of different strategies institutions have adopted to close the certification-amplification gap. The examples used to illustrate each taxonomic type collectively paint a picture of an ethos employing a rich range of certification and amplification techniques and emerging in a decentralized fashion across a heterogeneous range of objects, communication modalities, and institutional contexts.

2. A Mertonian meta-norm

Merton’s norms of science are widely endorsed by scientists (Anderson, Martinson, and De Vries 2007) and are thought to express a kind of “moral consensus” (Merton 1973, 169). Disinterestedness is a “distinctive pattern of institutional control” under which scientists systematically scrutinize each other’s claims (Merton 1973, 273). Organized skepticism calls for scrutinizing beliefs without the influence of religious, economic, and political institutions (Merton 1973, 277). Universalism requires evaluating claims according to “preestablished impersonal criteria” rather than on “the personal or social attributes of their protagonist” (Merton 1973, 270). And communism—a norm Harriet Zuckerman and Merton took to be technologically and socially institutionalized by the first peer-reviewed journal—“prescribes the open communication of findings to other scientists” (Zuckerman and Merton 1971, 69).

However, for Merton, the norms also “possess a methodologic rationale” (Merton 1973, 270). The norms are “procedurally efficient” means for achieving science’s ultimate “institutional goal”—namely, “the extension of certified knowledge” (Merton 1973, 270; italics mine), that is, knowledge claims which are both “socially shared and socially validated” (Merton 1968, 59). This instrumental logic can be seen in Merton’s characterization of the norms as bulwarks against perceived threats to scientific progress. Disinterestedness prevents “individuals from profiting through spurious claims, thereby decreasing the rate of fraud found among scientists compared to other professionals” (Merton 1973, 277). Organized skepticism protects the evaluation of scientific claims against resistance from organized religion as well as social and political groups (Merton 1973, 278). Universalism “ensures that individual contributions abiding by the technical norms of empirical evidence and logical consistency are recognized for their advancement of science” without social, political, or nationalistic bias (Merton 1973, 277). And communism “stands in contrast to the privatization of knowledge in a capitalistic economy that would prevent a shared and more efficiently developed body of knowledge” (Merton 1973, 277). These norms of science appear to earn their status by conforming to a meta-norm: that is, that they are instrumental towards the extension of certified knowledge.

Abstracting to this meta-normative level is critical because it allows us to understand that—had the institutional circumstances and challenges been different—science’s norms would have needed to be different as well. The contingency of Merton’s norms can be appreciated by contextualizing them historically: when he introduced them in the 1940s, “totalitarian states seemed to threaten both democracy and science” (Csiszar 2020, 11). We can see this preoccupation throughout. Merton contrasts the norm of universalism against examples of intellectually dishonest, nationalistic acts by scientists during World War I. His discussion of disinterestedness censures “presumably scientific pronouncements of totalitarian spokesmen on race or economy or history” (Merton 1973, 277). Organized skepticism protects science from the “anti-rationalism” of “modern totalitarian society” (Merton 1973, 278). And communism weighs in on the conflict between socialist and capitalist systems for sharing and rewarding intellectual property.

If the norms of science are to be instrumental towards the extension of knowledge, then they need to be continually reevaluated and reimagined in the face of new social, political, economic, technological, and legal challenges to scientific progress. Like Merton, we face a time when “attacks upon the integrity of science have led scientists to recognize their dependence on particular types of social structure” (Merton 1973, 267). The influences we worry about, however, are shaped by technologies and institutions with newly configured affordances, dynamics, incentives, and dangers, cleaving the contours for a new norm.

3. The contemporary decoupling of certification and amplification

Under Zuckerman and Merton’s narrative,¹ the introduction of peer-reviewed journal publication gave rise to a beneficial system whereby members of the Royal Society—in their triple roles as reviewers, authors, and readers of the Philosophical Transactions—protected the quality of published work as a means for burnishing authors’ individual reputations, the journal’s imprimatur, and the Society’s status (Zuckerman and Merton 1971). However, scientific communication and amplification have since evolved in ways that radically decouple certification, public disclosure, and amplification.

Although the preprint platform arXiv has been available since the early 1990s, preprint platforms (and their submissions) have proliferated over the last decade. Most preprint platforms are owned by non-profit academic groups or organizations (e.g., bioRxiv, medRxiv, the suite of “Rxiv” products using infrastructure provided by the Center for Open Science) or by for-profit publishers (e.g., Springer Nature, Elsevier, Wiley) (Kirkham et al. 2020). Preprints allow scientists to claim priority without the delays associated with peer-reviewed publication and to archive content in searchable ways; they are also often citable in downstream articles and grant applications via assigned Digital Object Identifiers (DOIs) (Kirkham et al. 2020).²

A number of innovations in this space are moving towards permitting the certification of preprints through: an internally structured peer review process with its own community of peer reviewers (Peer Community In); an external peer review service that displays reviews alongside original articles (e.g., PREreview); direct transfer to a journal that will conduct its own peer review (e.g., from bioRxiv or medRxiv to partnering journals); or a publisher’s en suite service where in-house preprint submission serves simultaneously as journal submission (e.g., F1000, In Review for select Springer Nature journals). However, in all of these cases, dissemination precedes certification, marking a reversal of the institutional processes described in Zuckerman and Merton’s normative model of science.³

The gap between the amplification of a scientist’s claims and the certification processes used to vet them is widening due to the exponentially increasing volume of scientific research. The Philosophical Transactions began as a monthly periodical (Moxham 2015). Since the beginning of the twentieth century, the volume of peer-reviewed journal articles has grown exponentially, with a twofold increase every twelve years (Dong et al. 2017). By 1968, Merton had already reported that scientists were concerned about their work getting lost in “the flood of published scientific research” (Merton 1968, 59). Merton recognized that “the vastly increased bulk of publication stiffens the competition between papers for” attention and uptake, which he anticipated would increase the “frequency and intensity” of the Matthew Effect (Merton 1968, 59).⁴ Indeed, current research finds that citations have become more, not less, centralized (Kim et al. 2020), with disparities between social groups increasing rather than decreasing over time (Dworkin et al. 2020; Bertolero et al. 2020). As he predicted, “social mechanisms that curb or facilitate the incorporation of would-be contributions into the domain of science” (Merton 1968, 60) are leading to increasingly skewed and unjust allocations of influence and recognition.
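A back-of-the-envelope gloss on that doubling rate (my arithmetic, not a figure reported by Dong et al.) conveys its scale: if the annual volume of articles doubles every twelve years, then

V(t) = V_0 \cdot 2^{t/12}, \qquad 2^{1/12} \approx 1.06,

so the literature grows by roughly 6 percent per year, and the 120 years since 1900 span about ten doublings, a more than thousandfold increase in the yearly output that certification processes must keep pace with.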

Cross-platform reinforcement mechanisms further accelerate disparities in reach and impact. For example, papers that are covered by the media (Phillips et al. 1991), mentioned on social media (Yan and Gerstein 2011), and shared as preprints (Fraser et al. 2020) garner more citations; conversely, citation rates likely inform algorithms driving social media news feeds, follow recommendation systems, publishers’ article suggestions, and search engine results (e.g., Google Scholar), thereby accentuating existing disparities in visibility (West and Bergstrom 2021).

Finally, the increasing certification-amplification gap creates social and political costs that threaten to obstruct the scientific enterprise. Although Merton’s “communication networks of science” (Merton 1968, 56) were conceived as populations of professional scientists, there is an increasing appreciation that “a key element in how science thrives and flourishes” is through successful communication with policy makers and members of the public (National Academies of Sciences, Engineering, and Medicine 2020, 16). Social media has blurred boundaries between public and private relationships (boyd 2010) and connected individuals via ties spanning larger network distances (Bak-Coleman et al. 2021), bringing the “scientist-layman relation” to the fore (Merton 1973, 277). While this enables earnest efforts to share expert information with the public, it also engenders the “abuse of expert authority and the creation of pseudo-sciences” (Merton 1973, 277), including unfounded claims propagated by Nobel Prize winners (Boodman 2021) and sensationalist science (Havstad 2021). Highly broadcast and destabilizing misinformation—about, for example, COVID-19 and climate change—poses an existential threat to the very democratic enterprise thought to provide the “institutional context for the fullest measure of [scientific] development” (Merton 1973, 270).

4. Certified amplification as emerging norm and ethos

Merton’s claims about the normative structure of science—and especially his claims with Zuckerman about its emergence—were grounded in the institutionalization of peer-reviewed journals. As such, they foregrounded an assemblage in which the direct object for the certification of knowledge was publication. However, now that scientists have the technological and social means for sharing their claims without gatekeeping, public disclosure is a trivial step compared to what should be understood as the proper object of certification: namely, the amplification of science, i.e., the spread of knowledge claims across individuals and their generated content (e.g., papers, citations, retweets, comments, policy statements).⁵

This shift in focus—away from publication to the more general phenomenon of amplification—can be made to accord with Merton’s views by appealing to his conceptualization of what it means to make a “contribution to science” (Merton 1968, 59). “[F]or science to be advanced, it is not enough that fruitful ideas be originated or new experiments developed or new problems formulated or new methods instituted” (Merton 1968, 59). Instead, “[f]or the development of science, only work that is effectively perceived and utilized by other scientists, then and there, matters” (Merton 1968, 59–60). This requires that scientists outcompete others in “the flow of ideas and findings through the communication networks of science” (Merton 1968, 56). Certified knowledge, then, is not simply “socially shared and socially validated” (Merton 1968, 59)—it is also amplified to some degree.

I propose certified amplification as a contemporary, Mertonian-styled norm of science, which addresses the decoupling of certification and amplification by enjoining their recoupling. Certified amplification is related to disinterestedness insofar as it prescribes the “accountability of scientists to their compeers” through “the exacting scrutiny of fellow experts” (Merton 1973, 276) and recognizes that institutions serve a critical role in facilitating this practice. However, rather than enjoin “the exacting scrutiny of fellow experts” tout court (Merton 1973, 276), certified amplification, as a set of practices, centers the ways that degrees and types of certification and amplification can vary as a function of each other. Certified amplification is also related to organized skepticism, insofar as certification rejects totalitarian-style interference in the evaluation of scientific claims. However, by foregrounding the audiences across which amplification takes place, certified amplification provides a lens for understanding the intellectual and political value of certification across different constituents of the scientific enterprise.⁶

To trace certified amplification’s emergence as an ethos, I will look to the “prescriptions, proscriptions, preferences, and permissions” expressed in practices and statements of “institutional values” (Merton 1973, 269) since these drive the evaluative and information communication technology choices shaping the affordances and dynamics of certification and amplification at scale.⁷ I will organize exemplars of this ethos around different functional approaches to recoupling certification and amplification. There is variation in amplification’s populations (e.g., communication networks of scientists, policy-makers, the public) and modality (e.g., journals, preprint platforms, search engines, article recommendation services, news, social media). There is also diversity in certification’s agents (e.g., journals, national and international health organizations, external peer review services, individual scientists through their citation and social media choices), means (e.g., peer review; open data, code, methods, and materials; pre-registration; replication; reproduction), and target objects (e.g., manuscripts, registered reports,⁸ data, individual people, claims broadcast by news and social media). And some acts of amplification can simultaneously serve as acts of certification (e.g., citations, replications, social media comments). However, the common ethic to recouple certification and amplification is made more remarkable for its decentralized surfacing across a heterogeneous range of objects, institutional contexts, and communication modalities.

4.1 Amplification Conditioned on Certification

This approach gate-keeps amplification. It was mainstreamed by peer-reviewed journal publication but also appears under different guises across the current scientific landscape. DOIs are assigned to peer-reviewed journal articles as well as to preprints recommended by peer review services (e.g., Peer Community In) and curated by overlay journals (e.g., Open Journal of Astrophysics).⁹ Peer Community In Registered Reports (PCI RR) orchestrates peer review for registered reports and for the subsequent study write-ups, where successful manuscripts can be published in any of the twenty-two journals that have agreed to accept articles from this service without further review (PCI RR 2021). Science journalists conduct “informal peer review” by contacting domain experts to evaluate preprints before deciding whether to cover them (Ordway 2020). And the National Library of Medicine launched the National Institutes of Health (NIH) Preprint Pilot, which makes publicly searchable and available on PubMed only those preprints written by researchers previously vetted via NIH intramural hiring or extramural funding (PubMed Central 2021). As in the case of journal peer review (McNutt 2019; Zuckerman and Merton 1971), amplification conditioned on certification is a binary mechanism—either content gets amplified or not—and can be used to protect the imprimatur of the institution(s) using their platforms to amplify research.

4.2 Amplification Proportionate to Certification

In this strategy, the degree to which some content gets amplified depends on the degree to which it has been certified, where certification can be an ongoing process. It is thought that citations and other reputation-and-authority metrics inform black-boxed algorithms that drive search engine and article recommendation results (Jensen 2007; West and Bergstrom 2021). Some have proposed systems for communicating research that aggregate reader assessments and prioritize attention by popularity, akin to models used by Reddit, Slashdot, and Stack Exchange (Nosek and Bar-Anan 2012; Tennant et al. 2017). The National Academy of Medicine launched a project to identify principles and attributes for identifying credible sources of health information on social media platforms, which—in the future—could be used to amplify content based on credibility cues such as citations, peer-reviewed work, conflict of interest disclosures, and other credibility attributes (Kington et al. 2021). Social media platforms have made some efforts to fight misinformation by deplatforming individuals and blocking hashtags (e.g., #VaccinesKill) promoting discredited information (De Vynck 2021). All of these examples involve amplification tools designed and controlled by (or modeled on those of) private companies, raising challenges related to public-private coordination, oversight, and transparency (Kington et al. 2021).
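To fix ideas, here is a minimal, purely illustrative sketch (in Python) of what a feed-ranking rule embodying amplification proportionate to certification could look like. The signal names, weights, and scoring function are hypothetical stand-ins of my own, not a description of any algorithm used by the platforms or proposals cited above.

from dataclasses import dataclass

@dataclass
class Item:
    """A piece of scientific content competing for amplification."""
    title: str
    engagement: float              # e.g., clicks and shares, which drive raw visibility
    peer_reviewed: bool = False    # vetted by a journal or preprint review service
    open_data: bool = False        # open science signal
    citations: int = 0             # downstream uptake by other scientists
    flagged_retracted: bool = False

def certification_score(item: Item) -> float:
    """Aggregate certification signals into a single illustrative score in [0, 1]."""
    if item.flagged_retracted:
        return 0.0
    score = 0.2                                # baseline for mere public disclosure
    score += 0.4 if item.peer_reviewed else 0.0
    score += 0.1 if item.open_data else 0.0
    score += min(item.citations, 30) / 100     # capped contribution from citations
    return min(score, 1.0)

def rank_feed(items: list[Item]) -> list[Item]:
    """Amplification proportionate to certification: order content by engagement
    weighted by its certification score, rather than by engagement alone."""
    return sorted(items, key=lambda i: i.engagement * certification_score(i), reverse=True)

On this toy model, a heavily shared but uncertified item can be outranked by a less viral but peer-reviewed, well-cited one; the gatekeeping of section 4.1 corresponds to the limiting case in which the certification score is either zero or one.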

4.3 Amplification with Certification Signals

Other efforts recouple certification with amplification through changes that ensure that content—however it is discovered—simultaneously delivers credibility cues. For example, journals (e.g., eLife) and preprint review services (e.g., Peer Community In, PREreview, Review Commons) publish reviewer comments alongside articles. Journals (McNutt 2019) and preprint platforms (Soderberg, Errington, and Nosek 2020) can publish statements alongside their articles about open science elements (data, materials, methods, code, and pre-registration of analysis plans) (Aalbersberg et al. 2018) as well as replications, reproductions, statistical rigor and checks, and conflict of interest declarations (McNutt 2019). The credibility of individual articles can also be signaled by “forward linking” to later replications that contextualize a result and by having editorial expressions of concern and retractions flagged across “indexing services (e.g., PubMed, Google Scholar, and DOI-registration agencies) and downstream elements, such as citations in derivative work” (Jamieson et al. 2019, 19234). Certification, in this last example, is a continuous process and requires cross-platform coordination (Jamieson et al. 2019). Perhaps in the future, credibility elements attached to articles and other sources of scientific information could inform procedures for amplifying content proportionate to its certification.

4.4 Certification Proportionate to Amplification

In this strategy, the degree to which some content gets certified depends on the degree to which it has been (or is expected to be) amplified. Some have argued that replication efforts should be focused on studies with high replication value, which can be measured by “the citation impact of a finding and the precision of the existing evidence of the effect” (Nosek, Spies, and Motyl 2012, 622). The World Health Organization created its “Health Feedback” website, in which experts provided evaluations of scientific claims that had received high visibility in the news or on social media (World Health Organization 2021). Likewise, the Johns Hopkins 2019 Novel Coronavirus Research Compendium assesses emerging research on SARS-CoV-2 and COVID-19, prioritizing both “original, high-quality research for public health action”—an example of amplification conditioned on certification—and “papers receiving significant attention, regardless of quality”—an example of certification proportionate to amplification (Johns Hopkins Bloomberg School of Public Health 2021). The preprint platforms bioRxiv and medRxiv enhanced their screening processes to reject manuscripts that could fuel conspiracy theories about COVID-19 (Kwon 2020), effectively raising certification standards for content judged to have high potential for amplification. Note that in a regime where certification is made proportionate to amplification, scientific claims expected to have low amplification value would require less scrutiny, which could make more efficient use of limited reviewer time.
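As a companion to the sketch in section 4.2, the following minimal illustration shows how a review service might triage scarce reviewer attention in proportion to expected amplification. Again, the proxy signals and weights are hypothetical choices of my own, not the prioritization rules of any organization described above.

from dataclasses import dataclass

@dataclass
class Claim:
    """A circulating scientific claim competing for scarce reviewer attention."""
    summary: str
    news_mentions: int        # coverage in mainstream media
    social_shares: int        # spread on social media
    citation_rate: float      # citations per month in the scholarly literature

def expected_amplification(c: Claim) -> float:
    """A crude, illustrative proxy for how widely a claim is (or will be) amplified."""
    return 2.0 * c.news_mentions + 0.01 * c.social_shares + 5.0 * c.citation_rate

def triage_for_review(claims: list[Claim], reviewer_hours: float) -> list[tuple[Claim, float]]:
    """Certification proportionate to amplification: allocate reviewer hours to each
    claim in proportion to its expected amplification, most-amplified claims first."""
    weights = [expected_amplification(c) for c in claims]
    total = sum(weights) or 1.0
    allocation = [(c, reviewer_hours * w / total) for c, w in zip(claims, weights)]
    return sorted(allocation, key=lambda pair: pair[1], reverse=True)

Under this toy allocation, a claim circulating widely in the news and on social media is queued for scrutiny ahead of an obscure one, in the spirit of the prioritization practiced by the compendium and fact-checking efforts described above.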

This taxonomy of certified amplification types and their exemplars should not be interpreted as being exhaustive. Nor should the taxonomy be taken to imply incompatibility or competition between approaches. A single platform can embrace all of these features at once: e.g., the PLOS family of journals employs pre-publication peer review (amplification conditioned on certification), publishes peer review reports and reader comments (amplification with certification signals), and provides a widget for sharing on social media for further discussion and evaluation (certification proportionate to amplification and amplification proportionate to certification). But given the highly decentralized nature of scientific certification and amplification, it is even more critical for the extension of certified knowledge that these different types of certified amplification take place across platforms: e.g., a paper may get posted as a preprint with credibility-related statements about open data (amplification with certification signals), undergo peer review for journal publication (amplification conditioned on certification), get amplified on social media for the quality of its data (amplification proportionate to certification), and have its study and data more carefully scrutinized as its visibility increases (certification proportionate to amplification).

5. Conclusion

I have argued for certified amplification as a Mertonian-styled norm whose emergence as an ethos is exhibited by decentralized institutional efforts to recouple certification and amplification across heterogeneous objects, communication modalities, and institutional contexts. A number of open questions remain about: inequities and biases in whose contributions get amplified and certified by the scientific community (Buchanan et al. 2021; Krieger et al. 2021; Dworkin et al. 2020; Bertolero et al. 2020); who should be in charge of certification and amplification standards and processes and why (McNutt, Córdova, and Allison 2021); how degrees and types of certification and amplification should vary as a function of the social costs of error (Havstad 2021); public-private coordination, oversight, transparency, and regulation of algorithms amplifying content on internet and social media platforms (Kington et al. 2021; West and Bergstrom 2021); the co-opting and weaponization of markers for scientific credibility and certification by legislators (Levy and Johns 2016) and counterpublics (Lee et al. 2021); how to better support publicly engaged scientific communication (Ordway 2020); incentives generated by different techniques for closing the certification-amplification gap (Nosek, Spies, and Motyl 2012; Teixeira da Silva and Dobránszki 2015; Jamieson et al. 2019; Heesen and Bright 2020); remaining gaps between certification and amplification in practice; and richer conceptions of certification and amplification as well as new/updated norms to address the aforementioned challenges.

Finally, because institutions and structures are always changing, the normative structure of science and its ethos will continue to evolve. Of particular interest is the recent formation of the Strategic Council for Research Excellence, Integrity, and Trust—a new body within the National Academies of Sciences, Engineering, and Medicine—designed to provide a more centralized means for “anticipating threats to research integrity and streamlining and improving accountability throughout the research enterprise” (McNutt, Córdova, and Allison 2021, 1). Its ambition is to “discuss, originate, and disseminate best practices, request creation of study committees to issue consensus reports on key issues, and form action collaboratives to implement recommendations” (McNutt, Córdova, and Allison 2021, 1). By articulating a collective vision of how institutions can be structured to promote the extension of certified knowledge and organizing its implementation, the Strategic Council is poised to articulate the norms and ethos for tomorrow’s science.

Acknowledgments

Many thanks to the institutional structures that made this paper possible during the COVID-19 pandemic, including (but not limited to): the Philosophy of Science Association, which delayed the conference for which this was written; my children’s school, which reopened fully in the fall of 2021; and my department chair, Andrea Woody, for helping divine time. Many thanks also to Liam Kofi Bright, Kevin Elliott, Sabina Leonelli, Felipe Romero, Patricia Soranno, and Alison Wylie for their insightful comments. Conflict of interest disclosures: I am a Coordinating Committee Member for the Transparency and Openness Promotion Guidelines and served as a judge for ASAPbio’s contest “Encouraging Preprint Curation and Review: A Design Sprint.”

Footnotes

1 Historical work has since suggested that routinized expert peer review began at the Transactions more than 150 years after its inception (Moxham and Fyfe 2018).

2 A DOI is a unique, unchanging alphanumeric string assigned to online content, making it easier to identify and retrieve for citations in manuscripts and social media mentions.

3 An exception is Review Commons, which can deposit refereed preprints to bioRxiv (and submit to affiliate journals).

4 In the Matthew Effect, successful scientists disproportionately accrue visibility and credit for their contributions while the less famous accrue disproportionately less.

5 Peer-reviewed journal articles may continue to be seen as a singular mode of communication for some purposes: for example, peer-reviewed journal articles are more likely to be cited than preprints in policy statements about COVID-19 (Yin et al. 2021).

6 I do not take conceptual overlap between certified amplification and Merton’s original norms to discount its plausibility. Merton himself proposed organized skepticism as a core norm despite its being “variously interrelated with the other elements of the scientific ethos” (Merton 1973, 277).

7 In contrast, others have inferred updated Mertonian norms from interviews and surveys of scientists (e.g., Anderson et al. 2010).

8 Registered reports describe experimental plans before study commencement.

9 Overlay journals curate collections of preprints and typically consist of links to accepted versions of papers hosted on the article’s originating preprint platform.

References

Aalbersberg, IJsbrand Jan, Appleyard, Tom, Brookhart, Sarah, Carpenter, Todd, Clarke, Michael, Curry, Stephen, Dahl, Josh, et al. 2018. “Making Science Transparent by Default: Introducing the TOP Statement.” OSF Preprints, submitted February 15, 2018. https://doi.org/10.31219/osf.io/sm78t.
Anderson, Melissa S., Martinson, Brian C., and De Vries, Raymond. 2007. “Normative Dissonance in Science: Results from a National Survey of US Scientists.” Journal of Empirical Research on Human Research Ethics 2 (4):3–14.
Anderson, Melissa S., Ronning, Emily A., De Vries, Raymond, and Martinson, Brian C. 2010. “Extending the Mertonian Norms: Scientists’ Subscription to Norms of Research.” The Journal of Higher Education 81 (3):366–93.
Bak-Coleman, Joseph B., Alfano, Mark, Barfuss, Wolfram, Bergstrom, Carl T., Centeno, Miguel A., Couzin, Iain D., Donges, Jonathan F., et al. 2021. “Stewardship of Global Collective Behavior.” Proceedings of the National Academy of Sciences 118 (27):e2025764118.
Bertolero, Maxwell A., Dworkin, Jordan D., David, Sophia U., López Lloreda, Claudia, Srivastava, Pragya, Stiso, Jennifer, Zhou, Dale, et al. 2020. “Racial and Ethnic Imbalance in Neuroscience Reference Lists and Intersections with Gender.” bioRxiv, submitted October 12, 2020. https://doi.org/10.1101/2020.10.12.336230.
Boodman, Eric. 2021. “He’s a Stanford Professor and a Nobel Laureate. Critics Say He Was Dangerously Misleading on Covid.” Stat, May 24, 2021. https://www.statnews.com/2021/05/24/stanford-professor-and-nobel-laureate-critics-say-he-was-dangerously-misleading-on-covid/.
boyd, danah. 2010. “Social Network Sites as Networked Publics: Affordances, Dynamics, and Implications.” In Networked Self: Identity, Community, and Culture on Social Network Sites, edited by Papacharissi, Zizi, 47–66. New York, NY: Routledge.
Buchanan, NiCole T., Perez, Marisol, Prinstein, Mitch, and Thurston, Idia. 2021. “Upending Racism in Psychological Science: Strategies to Change How Science Is Conducted, Reported, Reviewed, and Disseminated.” American Psychologist 76 (7):1097–112.
Csiszar, Alex. 2020. “Gaming Metrics Before the Game: Citation and the Bureaucratic Virtuoso.” In Gaming the Metrics, edited by Biagioli, Mario and Lippman, Alexandra, 31–42. Cambridge, MA: The MIT Press.
De Vynck, Gerrit. 2021. “YouTube Is Banning Prominent Anti-Vaccine Activists and Blocking All Anti-Vaccine Content.” The Washington Post, September 29, 2021. https://www.washingtonpost.com/technology/2021/09/29/youtube-ban-joseph-mercola/.
Dong, Yuxiao, Ma, Hao, Shen, Zhihong, and Wang, Kuansan. 2017. “A Century of Science: Globalization of Scientific Collaborations, Citations, and Innovations.” In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1437–1446.
Dworkin, Jordan D., Linn, Kristin A., Teich, Erin G., Zurn, Perry, Shinohara, Russell T., and Bassett, Danielle S. 2020. “The Extent and Drivers of Gender Imbalance in Neuroscience Reference Lists.” Nature Neuroscience 23 (8):918–26.
Fraser, Nicholas, Momeni, Fakhri, Mayr, Philipp, and Peters, Isabella. 2020. “The Relationship Between bioRxiv Preprints, Citations and Altmetrics.” Quantitative Science Studies 1 (2):618–38.
Havstad, Joyce C. 2021. “Sensationalist Science, Archaic Hominin Genetics, and Amplified Inductive Risk.” Canadian Journal of Philosophy. https://doi.org/10.1017/can.2021.15.
Heesen, Remco, and Bright, Liam Kofi. 2020. “Is Peer Review a Good Idea?” The British Journal for the Philosophy of Science 72 (3):635–63.
Jamieson, Kathleen Hall, McNutt, Marcia, Kiermer, Veronique, and Sever, Richard. 2019. “Signaling the Trustworthiness of Science.” Proceedings of the National Academy of Sciences 116 (39):19231–36.
Jensen, Michael. 2007. “The New Metrics of Scholarly Authority.” Chronicle of Higher Education, June 15, 2007. https://www.chronicle.com/article/the-new-metrics-of-scholarly-authority/.
Johns Hopkins Bloomberg School of Public Health. 2021. “Novel Coronavirus Research Compendium.” https://ncrc.jhsph.edu.
Kim, Lanu, Portenoy, Jason H., West, Jevin D., and Stovel, Katherine W. 2020. “Scientific Journals Still Matter in the Era of Academic Search Engines and Preprint Archives.” Journal of the Association for Information Science and Technology 71 (10):1218–26.
Kington, Raynard S., Arnesen, Stacey, Chou, Wen-Ying Sylvia, Curry, Susan J., Lazer, David, and Villarruel, Antonia M. 2021. “Identifying Credible Sources of Health Information in Social Media: Principles and Attributes.” NAM Perspectives. https://doi.org/10.31478/202107a.
Kirkham, Jamie J., Penfold, Naomi C., Murphy, Fiona, Boutron, Isabelle, Ioannidis, John P., Polka, Jessica, and Moher, David. 2020. “Systematic Examination of Preprint Platforms for Use in the Medical and Biomedical Sciences Setting.” BMJ Open 10 (12):e041849.
Krieger, Nancy, Boyd, Rhea W., De Maio, Fernando, and Maybank, Aletha. 2021. “Medicine’s Privileged Gatekeepers: Producing Harmful Ignorance about Racism and Health.” Health Affairs Blog, April 20, 2021. https://doi.org/10.1377/hblog20210415.305480.
Kwon, Diana. 2020. “How Swamped Preprint Servers are Blocking Bad Coronavirus Research.” Nature 581 (7807):130–32.
Lamont, Michèle. 2009. How Professors Think. Cambridge, MA: Harvard University Press.
Lee, Crystal, Yang, Tanya, Inchoco, Gabrielle D., Jones, Graham M., and Satyanarayan, Arvind. 2021. “Viral Visualizations: How Coronavirus Skeptics Use Orthodox Data Practices to Promote Unorthodox Science Online.” In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–18.
Levy, Karen EC, and Johns, David Merritt. 2016. “When Open Data Is a Trojan Horse: The Weaponization of Transparency in Science and Governance.” Big Data & Society 3 (1):1–6.
McNutt, Marcia, Córdova, France A., and Allison, David B. 2021. “The Strategic Council for Research Excellence, Integrity, and Trust.” Proceedings of the National Academy of Sciences 118 (41):e2116647118.
McNutt, Marcia K. 2019. “What Is the Role of Journals in Promoting Trustworthy Research?” AGU Fall Meeting Abstracts 2019:U14E03.
Merton, Robert K. 1968. “The Matthew Effect in Science: The Reward and Communication Systems of Science are Considered.” Science 159 (3810):56–63.
Merton, Robert K. 1973. “The Normative Structure of Science.” In The Sociology of Science: Theoretical and Empirical Investigations, edited by Norman W. Storer, 267–78. Chicago, IL: University of Chicago Press.
Moxham, Noah. 2015. “Fit for Print: Developing an Institutional Model of Scientific Periodical Publishing in England, 1665–ca. 1714.” Notes and Records: The Royal Society Journal of the History of Science 69 (3):241–60.
Moxham, Noah, and Fyfe, Aileen. 2018. “The Royal Society and the Prehistory of Peer Review, 1665–1965.” The Historical Journal 61 (4):863–89.
National Academies of Sciences, Engineering, and Medicine. 2020. The Endless Frontier: The Next 75 Years in Science. Washington, DC: The National Academies Press. https://doi.org/10.17226/25990.
Nosek, Brian A., and Bar-Anan, Yoav. 2012. “Scientific Utopia: I. Opening Scientific Communication.” Psychological Inquiry 23 (3):217–43.
Nosek, Brian A., Spies, Jeffrey R., and Motyl, Matt. 2012. “Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability.” Perspectives on Psychological Science 7 (6):615–31.
Ordway, Denise-Marie. 2020. “Covering Biomedical Research Preprints Amid the Coronavirus: 6 Things to Know.” The Journalist’s Resource: Informing the News, April 2, 2020. https://journalistsresource.org/health/medical-research-preprints-coronavirus/.
Peer Community In Registered Reports. 2021. “List of PCI RR-Friendly Journals.” https://rr.peercommunityin.org/about/pci_rr_friendly_journals.
Phillips, David P., Kanter, Elliot J., Bednarczyk, Bridget, and Tastad, Patricia L. 1991. “Importance of the Lay Press in the Transmission of Medical Knowledge to the Scientific Community.” New England Journal of Medicine 325 (16):1180–83.
PubMed Central. 2021. “NIH Preprint Pilot.” https://www.ncbi.nlm.nih.gov/pmc/about/nihpreprints/.
Soderberg, Courtney K., Errington, Timothy M., and Nosek, Brian A. 2020. “Credibility of Preprints: An Interdisciplinary Survey of Researchers.” Royal Society Open Science 7 (10):201520.
Teixeira da Silva, Jaime A., and Dobránszki, Judit. 2015. “Problems with Traditional Science Publishing and Finding a Wider Niche for Post-Publication Peer Review.” Accountability in Research 22 (1):22–40.
Tennant, Jonathan P., Dugan, Jonathan M., Graziotin, Daniel, Jacques, Damien C., Waldner, François, Mietchen, Daniel, Elkhatib, Yehia, et al. 2017. “A Multi-Disciplinary Perspective on Emergent and Future Innovations in Peer Review.” F1000Research 6:1151. https://doi.org/10.12688/f1000research.12037.3.
West, Jevin D., and Bergstrom, Carl T. 2021. “Misinformation in and About Science.” Proceedings of the National Academy of Sciences 118 (15):e1912444117.
World Health Organization. 2021. “Health Feedback.” https://healthfeedback.org.
Yan, Koon-Kiu, and Gerstein, Mark. 2011. “The Spread of Scientific Information: Insights from the Web Usage Statistics in PLoS Article-Level Metrics.” PLoS One 6 (5):e19917.
Yin, Yian, Gao, Jian, Jones, Benjamin F., and Wang, Dashun. 2021. “Coevolution of Policy and Science During the Pandemic.” Science 371 (6525):128–30.
Zuckerman, Harriet, and Merton, Robert K. 1971. “Patterns of Evaluation in Science: Institutionalisation, Structure and Functions of the Referee System.” Minerva 9 (1):66–100.