
The Changed Publishing Culture of Science

Published online by Cambridge University Press:  26 May 2025

Peter Weingart*
Affiliation:
Department of Sociology, Stellenbosch University, South Africa; Department of Sociology, Bielefeld University, Germany.

Abstract

This article addresses the question of whether the ‘freedom of science’ (or ‘academic freedom’) is affected by the various measures of evaluation and performance measurement that have been introduced into research and higher education. The assumption is that the political, economic and technical changes that have taken place over the last three quarters of a century have had profound effects on the communication of science in general and on scientific publishing in particular. The crucial developments are: the overall growth of science; the change of science policy in response to it, especially with respect to the governance of universities; the commercialization of academic publishing companies and the concentration of the journal market; and the digitization of academic publishing and the capture of the process of evaluation internal to science by the publishing companies via the production of performance indicators, as well as of communication on digital platforms. All of these developments are interrelated in specific ways and, as such, unfold their effects on the publishing process and specifically on the publishing behaviour of academics. The conclusion is not that academic freedom is curtailed by evaluations and the application of performance indicators; rather, these have led to a fundamental change in the publishing behaviour of scholars and, more generally, in the culture of communication in science.

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Academia Europaea

Introduction: The Freedom of Science – The Ideal Type as a Frame of Reference

During roughly the first decade and a half of the twenty-first century, academics observing the overall functioning of science grew increasingly concerned about certain developments in publishing. A scan of the pertinent studies of that time makes apparent that the changes considered problematic concerned the institutionalized ‘communication system’ of science as it had existed since the seventeenth century, albeit on a different scale.

Publishing represents both the internal operations of science as a social system and the mediation between it and its societal environment. The borders of that system were drawn provisionally with the foundation of the Royal Society and the famous delineation of the Society’s mission by Robert Hooke, one of its founders: ‘To improve the knowledge of natural things, and all useful Arts, Manufactures, Mechanic practices, Engines and Inventions by Experiments (not meddling with Divinity, Metaphysics, Morals, Politics, Grammar, Rhetoric or Logic)’ (cited from Ornstein 1963, emphasis added). In particular, the separation from politics and religion, while never achieved completely, has remained the basic epistemic principle, a regulative norm, on which the separation of the system from its environment rests. Violation of this principle would eliminate the ‘protective shield’ of science and make its striving for objectivity and neutrality vulnerable to political corruption and censorship as well as to religious fervour.

The same principle informs the respective clauses in constitutions pertaining to the freedom of science in many countries. Nowadays, more than 170 states have ratified the United Nations’ International Covenant on Economic, Social and Cultural Rights, which protects the freedom of scientific research as a universal right and public good. Freedom of science is a core principle of the European Union and has constitutional or legal status in most EU Member States (Bonn Declaration on Freedom of Scientific Research 2020).

However, the ‘freedom of science’ as stipulated in the various national constitutions is quite general with respect to the kinds of threats; it is also not specific with respect to the different functions (‘all freedoms necessary for the scientific pursuit’, Romano and Boggio 2020: 4) that may be threatened. Only in the Bonn Declaration on Freedom of Scientific Research is the definition of the ‘freedom of research’ connected to the freedom of opinion and the right to share, disseminate and publish research results without censorship by governments or other institutions (Bonn Declaration 2020).

In the following, the focus will be on major changes in the conditions that affect the dissemination of knowledge claims and research results. The assumption is that the political, economic and technical changes that have taken place over the last three quarters of a century have had profound effects on the communication of science. The structural dynamics responsible for the changes in scientific publishing are: the overall growth of science; the change of science policy in response to that growth, especially with respect to the governance of universities; the commercialization of academic publishing companies and the concentration of the journal market; and both the digitization of academic publishing and the capture of the process of evaluation internal to science by the publishing companies via the production of performance indicators, as well as of communication on digital platforms. All of these developments are interrelated in specific ways and, as such, unfold their effects on the publishing process and specifically on the publishing behaviour of academics (Weingart and Taubert 2017 [2016]). Most of these effects have been studied intensely and are therefore well known. In contrast to the many narrowly focused studies, the objective here is to trace the effects on the macro-level of science as a social system, on the organizational meso-level and on the micro-level of individual scientists as actors, in order to obtain a better understanding of the magnitude of the change. It will be shown that the lens of the various constitutional or legal provisions of the ‘freedom of science’ is insufficient to capture the development. There is no empirical evidence that the dissemination of science is curtailed by bibliometric or scientometric indicators. Rather, my thesis is that the implementation of these tools in the management of science and science policy has fundamentally changed the norms governing academic publishing.

Growth of the Science System

As a social system, science has grown ever since its inception in the seventeenth century. An institution that doubles in size about every 15 to 17 years will most likely change its identity and have a significant impact on its external environment. Thus, looking at the growth of global science provides a key to understanding its own changes as well as its changing role in society. However, determining how much exactly it has grown and how to measure that growth depends on definitions of central elements (e.g., inputs, outputs) and on the availability of appropriate data. This is already evident in the fate of Derek de Solla Price’s prediction of 1963 that ‘scientific doomsday is […] less than a century distant’, which was based on his analysis that science had grown five orders of magnitude by 1960 and could not grow another two orders of magnitude unless ‘we should have two scientists for every man, woman, child, and dog in the population’ (Price 1963: 19). Price noted the problem of ‘redefinitions of basic terms’ and insisted on the inevitability of saturation but also admitted that he was not sure if saturation had already set in at that time.

Given the problems of definition and even of the units measured – e.g., scientists, publications such as journals, or articles – for the purpose of this article I will rely on some current data-supported studies. Price had determined an exponential growth rate of science of about 7% per annum. That was obviously too high, as the implied doubling time of 10 years would have meant ‘growth by a factor of 32 in 50 years and of one billion in 300 years’ (Larsen and von Ins 2010: 576). These estimates are mostly calculated on the basis of research article counts. Articles are more easily counted than scientific personnel since, apart from the question of available data, definitions of membership vary widely (Ware and Mabe 2015: 37). Johnson et al. (2018: 32), summarizing OECD and UNESCO statistics, diagnose a 3–4% growth per year in the number of global researchers (estimated to have increased from 5.7 million in 2002 to 8.4 million in 2011), with some short-term dips during economic recessions, such as in 2009.

Turning to growth rates based on publications, Johnson et al. (2018: 26) state:

The number of peer reviewed journals published annually and still active had been growing at a steady rate of about 3.5% per year for over three centuries, although the growth did slightly accelerate in the post-war period 1944–78. The growth rate of 5–6% seen in the last decade is therefore significantly above the long-term trend.

Bornmann et al. (2021: 8), for their part, estimate an overall growth rate of 4.1% and a doubling time of 17.3 years.
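
These figures are internally consistent. As a back-of-the-envelope check (my own arithmetic, not taken from the cited sources), the doubling time \(T\) implied by a constant annual growth rate \(r\) is

\[
T = \frac{\ln 2}{\ln(1+r)}, \qquad r = 0.07 \Rightarrow T \approx 10.2 \text{ years}, \qquad r = 0.041 \Rightarrow T \approx 17.3 \text{ years}.
\]

With a 10-year doubling time, 50 years yield a factor of \(2^{5} = 32\) and 300 years a factor of \(2^{30} \approx 1.07 \times 10^{9}\), which reproduces the objection of Larsen and von Ins quoted above.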

One issue is how the growth rates of researchers and of publications are related. National Science Board data show that the average number of authors per paper increased from 3.2 in 1996 to 4.4 in 2015 (Johnson et al. 2018: 36). More recently, there has been a trend toward ‘hyper-authorship’, with author lists ranging from 50 all the way up to 1000 names per paper. Expanding one’s network of collaboration, ‘thus obtaining (co)authorship on a higher number of papers with the same amount of research effort’, may be a response to the pressure to publish (more) (Fanelli and Larivière 2016: 8). But it may also be the result of the emergence of new kinds of instruments that require the cooperation of a large number of highly specialized experts, as in high-energy physics, or of geographically distributed research teams, as in astronomy.
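
The mechanism behind Fanelli and Larivière’s point is easiest to see by contrasting two ways of counting publications. The following minimal sketch (Python, with invented numbers) compares ‘full counting’, which credits every co-author with a whole paper, against ‘fractional counting’, which divides each paper by its number of authors:

```python
# Minimal sketch (invented data): full vs. fractional counting of authorship.
# Full counting credits each co-author with one whole paper; fractional
# counting divides each paper by its number of authors. The same research
# effort therefore yields far more full-count papers when co-authorship grows.

solo_author_papers = [1, 1, 1]   # three papers, each written alone
team_papers = [4, 4, 4, 4]       # four papers, each with four co-authors

def full_count(author_numbers):
    """One credit per paper, regardless of how many co-authors it has."""
    return len(author_numbers)

def fractional_count(author_numbers):
    """Each paper is split evenly among its authors."""
    return sum(1 / n for n in author_numbers)

print(full_count(solo_author_papers), fractional_count(solo_author_papers))  # 3 3.0
print(full_count(team_papers), fractional_count(team_papers))                # 4 1.0
```

Under full counting the team author looks more productive (4 papers versus 3); under fractional counting the picture reverses (1.0 versus 3.0), which is the sense in which Fanelli and Larivière find that individual publication rates, properly adjusted, have not increased.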

Another way to account for the growth of science is to focus on expenditures for science. According to Ware and Mabe (2015: 36), governments ‘see spending on R&D as critical to innovation, growth and international competitiveness [and across] the world, the average proportion of national GDP spent on R&D was about 1.7% in 2010’. Spending on R&D has, in fact, grown faster than global GDP: from US$522 billion in 1996 to US$1.9 trillion in 2015. Some 90% of these expenditures are accounted for by North America, the EU and Asia (Johnson et al. 2018: 29). This increase of resources explains, first of all, the growth in the number of researchers but also, though indirectly, that of research publications.
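
Those two endpoints imply a compound annual growth rate of roughly

\[
\left(\frac{1900}{522}\right)^{1/19} - 1 \approx 0.07,
\]

i.e., about 7% in nominal terms over the 19 years from 1996 to 2015 (again my own back-of-the-envelope figure, not one from the cited report), comfortably above the growth of global GDP over the same period.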

In conclusion, it may be said that, given the problems of measuring the growth of science, it cannot be determined with confidence on the systems level whether the growth rate of publications differs significantly from that of the number of researchers and, thus, whether the freedom of dissemination has been constrained. However, the overall estimates of the growth of the system are sufficiently reliable to allow some estimates of systemic impacts.

Changes in Policy for Science and in University Governance

The growth of the science system is partly due to the changed role of science in the industrialized countries and has triggered a series of changes in the policies for science, especially since the end of the Second World War. An important element of the increased public spending was the expansion of the higher-education sector, i.e., the shift from the former elite universities to mass higher education that happened across Europe and North America starting in the late 1960s and that has become a global development, although, again, with considerable differences across countries (Calderon 2018). Since the 1970s, the rising costs incurred by this development have put governments under growing pressure to make higher-education systems more efficient and cost-effective.

The state’s withdrawal from detailed regulation came, above all, with the expectation of accountability (World Conference on Higher Education 1998: Art. 13b). The mandate to be accountable had two consequences: (1) strong institutional management – as a ‘key component of university governance’, universities became ‘organizational actors’ (Krücken and Meier 2006: 244 and 247); and (2) the need to produce and communicate performance measures that are generalizable and comparable across universities. Accountability was interpreted as reporting to the ‘outside publics’ (i.e., government, parliament, students, stakeholders and wider society) and led centralized university administrations to establish public relations units. This is relevant with respect to the freedom of dissemination insofar as the communication of these PR units is guided by the strategic objectives of the university, and in a number of cases this has already led to conflicts between central administrations and professors (Weingart 2022; Väliverronen et al. 2022).

The significance of this development cannot be overstated. The fundamental shift concerned public science as a social system (cf. Scott 2013: 115). It was transformed from being largely self-referential to one that is, to a hitherto undefined degree, oriented to its external environment and open to challenges from outside. The products of universities, education and knowledge, previously almost inaccessible for assessment from outside, were opened to public scrutiny by being translated into quantitative indicators. These proxy measures allow comparisons of performance within and across universities and research laboratories globally. Indicators have not only met the mandate of accountability but have also become the basis of a new, performance-based, funding system that affects both universities and scientists. The former input-legitimacy of funding decisions was shifted to the seemingly higher output-legitimacy of the new ‘performance-based research-funding systems’ (PRFS).
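
What ‘translation into quantitative indicators’ means in practice can be made concrete with one of the simplest and most widely used bibliometric proxies, the h-index; the sketch below (Python, invented citation counts) is offered purely as an illustrative example, not one singled out in the text above. It shows how an entire publication record is compressed into a single comparable number:

```python
# Illustrative sketch: compressing a publication record into one proxy number.
# The h-index is the largest h such that h of an author's papers have each
# been cited at least h times. The citation counts below are invented.

def h_index(citations):
    ranked = sorted(citations, reverse=True)   # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:                      # paper at this rank still qualifies
            h = rank
        else:
            break
    return h

record = [25, 8, 5, 4, 3, 1, 0]                # citations per paper
print(h_index(record))                         # 4: four papers cited >= 4 times
```

It is precisely this one-number comparability, across persons, departments and universities, that makes such proxy measures attractive to non-expert administrators and rankings alike.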

The reality of the implementation of evaluation systems, quantitative indicators and PRFS reveals a similarly varied picture, as do the different types of university governance that have emerged in a host of countries. Hicks’ detailed analysis of 14 different systems in 2010 led her to conclude that ‘PRFSs were found to be complex, dynamic systems, balancing peer review and metrics, accommodating differences between fields, and involving lengthy consultation with the academic community and transparency in data and results’ (Hicks 2012: 260). The differences between countries (e.g., EU Member States) with respect to their methods of allocating funding remain considerable, some still relying on peer review but others using quantitative metrics as their preferred method. ‘Given these variations it is difficult to give a precise assessment of the volume of research funding that is allocated through performance-based funding in the different EU Member States’ (Zacharewicz et al. 2019: 113). Nevertheless, the scope of the application of metric-based methods seems to have given the EU sufficient reason to advise caution in using them. One of the many risks identified that is of interest in this context is the reduction of researchers’ autonomy, ‘owing to a need to conform with university management’s efforts to encourage research that will generate income from the PRFS’ (European Commission, DGRI et al. 2018: 139). The use of performance measures for political decision-making regarding funding had a profound effect, both on the organizational and on the actors’ levels. On the actors’ level it affected the internal system of certification of knowledge claims and the creation of scholars’ reputations, as it established competition beyond the disciplinary context that was previously the frame of reference. Thus, the process of knowledge production and the ascription of reputation became at least potentially accessible to outside non-expert actors, e.g., university and government administrators and national and global media.

This is even more pronounced on the organizational level. Universities as organizations found themselves competing on what have been termed ‘quasi-markets’ that were set up by governments and funding agencies in the (neo-liberal) belief that this would make them more efficient. The scale that would, in principle, make the global system of universities comparable and serve as an additional reference for performance-based funding (PBF) was created in the form of world university rankings (WURs). Universities thereby become entities in a potentially global competition. With this, the relation between the university as an organization and its members changes fundamentally, ‘from the autonomy of the individual researcher and teacher, above all the professor, to the autonomy of the university organization’ (Krücken 2021: 176; de Boer et al. 2007; Entradas et al. 2023).

The gain in comparability came at a price. The numerical indicators were seemingly more ‘objective’, but they can also be ‘gamed’ more easily. Thus, the introduction of rankings may be seen as at least an indirect cause of changing publishing behaviour (Lee et al. 2021: 15).

Commercialization of Publishing Companies

The growth of the science system, the rise of innovation-oriented science policy, the increasing application of new public management (NPM) and the public-relations-focused university constitute the set of driving forces defining a different institutional context for the dissemination of academic content. The agents of publishing are more directly involved. From the very beginning of modern science, publishing was a defining element of the system, i.e., the public exchange and discussion of knowledge claims as the final step in their certification. Since then, the growth and disciplinary differentiation of science have established the journal as the central mode of publication. The monograph has been prominent only in the humanities and is now losing out to the journal even there. The journal – whether scholarly, technical or trade – combines four functions that are central to science as a social system.

First, registration of an author’s idea to secure priority and ownership. This is crucial in the process of the ascription of reputation, which determines the social structure within the respective discipline and steers attention to relevant research. Second, dissemination of the relevant research results to the intended audience. Third, certification through quality control via peer review. Fourth, archival recording, i.e., preserving the original version of articles for ‘future reference and citation’ (Ware and Mabe 2015: 16).

In line with this, at least until the Second World War, journals were published mostly by scientific societies, i.e., academic organizations without commercial interests. This arrangement changed with increasing speed. By the mid-1990s, commercial publishers accounted for 40% of journal output, scientific societies for 25% and university presses for 16% (Larivière et al. 2015: 2). In 2013, only one scientific society, the American Chemical Society, was among the top five publishers with the highest number of scientific documents (in fourth place) in the natural and medical sciences (NMS). The high article processing charges (APCs) that it charges (higher than those of Elsevier) demonstrate that at least a part of academia has joined the movement of commercialization (Open APC 2024). In the words of Larivière et al. (2015: 3–4):

In terms of numbers of papers published, the five major publishers in NMS, accounted, in 1973, for little more than 20% of all papers published. This share increased to 30% in 1996, and to 50% in 2006, the level at which it remained until 2013 when it increased again to 53%. In this domain, three publishers account for more than 47% of all papers in 2013: Reed-Elsevier (24.1%; 1.5-fold increase since 1990), Springer (11.9%; 2.9-fold increase), and Wiley-Blackwell (11.3%; 2.2-fold increase).

In essence, this development means that the vast majority of highly reputed academic journals are owned and managed under a commercial profit interest. Shareholder value is the dominating logic. The five largest publishing companies now command a firmly established oligopoly that effectively dictates journal prices and APCs. All efforts on the part of universities or even national governments to (re-)gain control of this market have failed so far.

The situation is grave for science, as the system is caught in a commercial trap in several ways. First, the Ingelfinger rule, which prohibits authors from submitting their manuscripts to more than one journal at the same time, gives a journal an effective monopoly on the contents of each paper it publishes: since each article is unique, one journal cannot simply be replaced by another (Larivière et al. 2015: 12).

Second, since articles are the currency in evaluations and hiring decisions, scholars depend on the reputation of journals, and thus on the publishers’ efforts to manage the peer-review process and to care for appropriate ‘branding’ of the respective journal. It is nearly impossible for individual scientists to escape from the system without risking their visibility and their reputation in their respective community. That explains why the ‘open science’ movement, as an attempt to counteract this lock-in, failed (Mirowski 2018).

Third, with the advent of digitization, the commercial publishers have become data-based, data-producing and data-selling companies. The entire publishing process, including peer review and quality control, is performed digitally through editorial management systems, and thus the flow of manuscripts is organized by the publishers. The articles are listed in databases such as Clarivate’s Web of Science or Elsevier’s Scopus, together with citation and impact data, all of which are the basis of rankings of both individual scientists and universities. Since these data are the currency of the publishers’ business model, they have a commercial interest in sustaining the publishing process, i.e., in expanding rather than limiting the number of articles published in their journals.

The coupling of the scientists’ competition for reputation, the use of publication data as performance measures and the publishers’ control of these data, as well as their interest in selling them, has led to a new practice: so-called ‘cascading’. Articles that are rejected by one journal are transferred to another journal for possible publication. Authors are drawn into a communication and dissemination system that could be called ‘reputation management’: the first step is trying to land an article in a highly ranked journal, giving priority to the ‘journal impact factor’ (JIF) over the disciplinary community to be addressed. If the paper is rejected, they may turn to a lower-ranked journal as a second attempt, supported by the publishers’ transfer desks, and so on down the ladder of reputation. In this way, the overall volume of articles increases as publishers try to ensure that no articles will be lost. It is an open question to what extent the incentivized cascading by publishers has diluted the quality standards of academic publishing, but it hardly suggests a constraining of the dissemination of knowledge (cf. also Fyfe et al. 2017: 16).
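
The systemic logic of cascading can be illustrated with a toy calculation (all acceptance rates invented for the example; this is not an empirical model). Even when the top journal accepts very few submissions, passing rejected manuscripts down a ladder of tiers means that almost everything is published somewhere:

```python
# Toy model of 'cascading' (all numbers invented): a manuscript rejected at
# one tier is resubmitted to the next one down. The share of manuscripts that
# is never published anywhere shrinks with every additional tier.

acceptance_rates = [0.05, 0.20, 0.40, 0.60]   # top journal -> bottom tier

def share_published(rates):
    never_accepted = 1.0
    for rate in rates:
        never_accepted *= (1.0 - rate)        # probability of rejection at this tier
    return 1.0 - never_accepted

print(f"{share_published(acceptance_rates):.1%}")  # 81.8% end up published
```

In this stylized ladder, a 5% acceptance rate at the top still translates into more than 80% of all manuscripts eventually appearing in print, which is exactly the publishers’ interest in ensuring that no articles will be lost.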

The commercial publishers have tried to dispel worries about their impact on the quality of research by promoting mechanisms of quality control, i.e., peer review. But cascading actually serves the commercially motivated publishers by keeping as many articles as possible in the system, with the effect that many papers that would otherwise have been eliminated by the peer-review process find their way into lower-ranked journals (Wood 2018). This practice, while justified as being benevolent to authors, actually obscures the demarcation between sound quality control and the questionable operations of ‘predatory journals’.

In essence, the process of cascading demonstrates that large sections of the system of academic knowledge production have been captured by the commodification of the publishing industry (cf. Larivière et al. 2015: 12). In their drive for ‘open science’, governments and research funding organizations initially favoured article processing charges to promote open access. When they discovered that the oligopoly of publishers increased its profits rather than reducing the costs of publishing, it was too late. Attempts to regain control have failed so far (Butler et al. 2023). Again, the risk of losing the publishing infrastructure provided by the publishing industry, foremost the reputation of branded journals, kept authors from deserting it, an immobility reinforced to differing degrees by disciplinary cultures (Severin et al. 2020). Thus, in effect, the scholarly community has become complicit with a system that corrupts its norms and values (Mirowski 2018). It was lured into doing so by, among other things, reputation races (between individuals, organizations and countries), accountability regimes and indicator-based evaluations. The ensuing competition is further fuelled by funding agencies, university administrations and the media.

Digitization of Communication and Publishing in Academia

Digitization has brought about several different technologies that together constitute a digital ecosystem of multiple overlapping and interacting influences on organizational and individual actions in the social system of science. When looking at the effects of digitization on scholarly communication, one distinction pertains to the levels on which digitally captured and processed data affect science. The growth of the science system resulted in higher research output and, thus, a larger publication volume. One effect was the proliferation of scientific information services intended to make it possible for researchers to keep up with the progress in their respective fields.

Back in the 1950s and 1960s, research publication abstracting services covered only about 50% of the literature, with the rest being virtually invisible and untraceable. This also implied a lacuna in the internal evaluation and ascription of reputation. Here, Eugene Garfield provided an answer through indexing: ‘Rather than digesting research articles according to their semantic content as librarians had traditionally done, citation indexing enabled the organisation of articles according to the works they referenced’ (Goldenfein and Griffin 2022: 4). This became the Science Citation Index, which lifted the evaluation practice to a higher level of abstraction and, in addition, allowed worldwide comparisons. With the subsequent construction of bibliometric indicators, the gate was opened to the introduction of university rankings and the new performance measures.
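
Garfield’s inversion is simple enough to express in a few lines. The sketch below (Python, with invented records) indexes articles not by what they say but by what they cite, turning reference lists into a citation index from which raw citation counts, the seed of later indicators, fall out directly:

```python
# Minimal sketch of citation indexing (records invented): instead of indexing
# articles by their semantic content, invert their reference lists so that
# each cited work points to the articles citing it.

references = {                       # article -> works it cites
    "Smith2020": ["Price1963", "Merton1968"],
    "Lee2021":   ["Price1963"],
    "Chan2022":  ["Merton1968", "Smith2020"],
}

citation_index = {}
for article, cited_works in references.items():
    for work in cited_works:
        citation_index.setdefault(work, []).append(article)

print(citation_index["Price1963"])        # ['Smith2020', 'Lee2021']
print(len(citation_index["Merton1968"]))  # 2 -> a raw citation count
```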

The subsequent development was closely linked to digitization, i.e., the emergence of the internet. It allowed large amounts of data to be processed in a relatively short time, the establishment of international bibliographic databases such as Scopus and Web of Science, and the introduction of ‘editorial management systems’. Publishing companies morphed into data-processing companies. The collected digitized data became the basis for indicator construction. But, most importantly, digitization raised the level of communication in and of science to yet another stage. As the accessibility of scientific literature became globalized and accelerated over time due to the internet, the processing of content was inevitably yet one more step removed from directly accessing the semantic content of research articles, as scholars (and librarians) had traditionally done, rendering abstracting services even more important (Thelwall and Sud 2022: 42). The cost of implementing this technology of communication could only be absorbed by commercial publishing companies, which, predictably, subjected it to their profit-oriented business model.

Another aspect of digitization relates to information and dissemination platforms. While it appears at first glance that platforms are unrelated to publishing, both are rooted in the commercial business model (with some exceptions, cf. Taubert 2018). Platforms, in particular, nudge scholars and their organizations to compete for general attention. Social media channels designed for general public communication, such as Facebook and X (formerly Twitter), have opened the gate to an outward orientation of scholarly communication. While scholars use these ‘social media’ very selectively and with interests other than academic communication, universities and research organizations routinely turn to them to communicate with the general public and, in doing so, capture widespread attention (Costas and Ferreira 2020; Entradas et al. 2023). Their communication units address ‘imagined publics’ with strategic interests, lured by the large number of followers and disregarding the platforms’ sometimes questionable reputation (Krücken 2024).

Apart from the social-media platforms, there are those that cater specifically to the academic world, such as Academia.edu and ResearchGate, which offer dissemination, distribution and attention-attracting services. Contrary to the impression they convey, their primary goal is not to help academics to communicate with one another, but rather ‘to monetize that communication’ (Fitzpatrick 2015). The mere opportunity to have one’s work seen by an incomparably large number of members not only attracts academics but also impacts their communication and publishing behaviour, namely ‘by amplifying the logic of self-branding among scholars’. They ‘are the scholarly analogues to Facebook, Instagram, and the rest’ (Duffy and Pooley 2017: 2 and 8).

Similarly, the ways in which researchers use Google Scholar to disseminate and present scholarly content nudge them away from ‘their own determination of relevance, in favour of accepting intellectual topographies and hierarchies of authority that are opaquely constructed and controlled by Google Scholar’ (Goldenfein and Griffin 2022: 23). There are also counterexamples: infrastructures provided by the scientific community and public institutions, such as OpenAlex and Unpaywall, which are part of a development towards open citation databases. But the still more influential commercial platforms are, again, cases in point of the entanglement of the logics of science and the economy that digitization has intensified, with far-reaching consequences for the production of epistemically certified knowledge and its credibility.

Changes in Publishing Behaviour

The impact of the developments described above on the publishing behaviour of scholars and, thus, on the forms of publication and the dissemination of research results has been the subject of many studies covering the last four to five decades. Some of them are contradictory, many of them agree, and most of them can only conjecture about the relation between causes and effects, since direct causality is almost impossible to establish (Gläser 2017).

De Rijcke et al. (2015) identify in their review of the literature on ‘indicator use’ two main effects pertaining to researchers’ behaviour. The first is goal displacement, i.e., scoring high on the assessment criteria becomes the primary objective rather than meeting a substantive goal in research. This is, by far, the most fundamental general effect, as it undermines the validity of the institutional norms of science and replaces them with an instrumental attitude. It opens the gate to calculating ‘gaming’ and possibly to the violation of the rules of good scientific practice (Aubert Bonn and Pinxten 2021). The second effect is a ‘fundamental transformation of the scientific or scholarly process itself regarding the assessment criteria (for instance, by avoiding risk in selecting research topics), a transformation that may be harder to recognize’ (de Rijcke et al. 2015: 2).

The UK Research Assessment Exercise appears to have had a pre-emptive impact in the sense that ‘research productivity of individuals increased over time, but the effects differed across departments and individuals. Where researchers in higher-ranked programmes increased their output in higher-quality journals, researchers in lower-ranked departments aimed at increasing their publications in other outlets’ (de Rijcke et al. 2015: 3). Other effects concern the choice of journals (specialized versus general; high JIF versus lower JIF), the length and format of articles, the general shift from books to articles in the humanities and social sciences, a trend towards English-language journals, and the numbers of co-authors (increasing) and of citations.

The various observations in different countries underscore the conclusion that the worldwide shift to increasingly standardized publication practices and individual performance measures based on quantitative bibliometric indicators has led to goal displacement in academics’ publishing behaviour. Some of the tactics used are novel; others have existed before but have now expanded to such a scale that they impact the publishing system as a whole. Thus, strategic behaviour is held responsible for the spread of practices that are questionable or that violate the rules of scientific integrity. A prominent example is ‘salami slicing’, i.e., dividing one study into the smallest publishable units (Fochler and de Rijcke 2017). The extent of such practices is said to range between 1% and 28% of papers, depending on the level of plagiarism (Haustein and Larivière 2015: 8). However, such numbers have to be taken with caution, as circumstances change rapidly.

Akin to ‘salami slicing’ are self-citing, self-plagiarism and redundant publication (text recycling), all designed to inflate publication counts and, thus, to improve an author’s position in evaluations (Martin 2017; Horbach and Halffman 2019). Honorary authorships, i.e., adding researchers to the list of authors without any contribution from them, are also supposed to increase publication output and have even been commodified for a specific market. Haustein and Larivière (2015: 9) thus report that ‘in China, […] authorship on papers accepted for publication in Impact Factor journals were offered for as much as US$14,800’. Journals try to counter this practice by requiring statements of author contributions, which also shows that concerns about unethical practices are rising.

The most troubling part of this development is that the line between best practice and unethical publishing is being blurred. This is amply illustrated, for example, by the lack of consensus over the permissibility of ‘self-plagiarism’. Positions in the debate range from deeming it ‘academic misconduct’ to claiming that ‘it does not exist’ or is ‘unavoidable’ (Horbach and Halffman 2019: 493). Likewise, there are effects of these behavioural changes in academic publishing on the overall output of papers, even though, again, no direct causal connections can be established. In December 2023, the journal Nature reported that ‘more than 10,000 research papers were retracted’ during that year, ‘a new record’; integrity experts say that this is ‘only the tip of the iceberg’ (Van Noorden 2023: 479).

Conclusions: Constraints on Freedom of Dissemination or Change in the Publishing Culture of Science?

Returning to the initial question, it would simply miss the point to claim that the introduction of quantitative bibliometric performance indicators represents an infringement of the freedom of dissemination of scientific results. The changes initiated by these developments, both direct and indirect ones, are far more fundamental and encompassing. In actual fact, the introduction of these relatively novel instruments of governance represents an intervention into the operations of science, designed to open it to the public and to political administrations, by way of nudging both individual scientists and academic organizations to compete in publicly visible evaluations and rankings.

Like all interventions into complex systems, these have triggered unintended consequences. The most important among them with reference to the question at hand is the widespread goal displacement at the level of the individual actor. The independent decision of the researcher regarding how, where and when to communicate the results of his or her research is now made in an omnipresent culture of comparative self-optimization, which in the case of the social system of science means gauging one’s position, opportunities and risks when publishing not just before the audience of one’s immediate disciplinary community of peers but before a general public represented by the media, on the internet and on digital platforms.

Compared with the stipulations of the ‘freedom of science’ in the various constitutions and declarations, the forces present in today’s environment – political, economic and technical – exert an inescapable influence. The longer-term effects on the process of knowledge production, on verification processes and on the attribution of scholarly reputation as a criterion of epistemic credibility are impossible to predict.

To be sure, there are signs of resistance: declarations that call for a return to previous practices and warn of the detrimental consequences of the misuse of performance measures, such as DORA, the Leiden Manifesto, the Metric Tide, the Budapest Open Access Initiative and others. In addition, organizations have been founded in several countries to trace and sanction misconduct and fraud in science, such as the US Office of Research Integrity (ORI), the Danish Committees on Scientific Dishonesty and the Austrian Agency for Research Integrity, although they are perhaps just as much an indicator of a deteriorating culture of science.

But not only is it hard to imagine that science as an institution will ever be re-instituted as it existed, say, a century ago; it is also a reality that the norms and values that used to govern science, and that have been encapsulated in the legal provisions of the freedom of science, have changed. The most obvious sign is the extent to which dealing with, reacting to and accepting the metrics toolbox as given is already standard practice in universities and research organizations. Probably the majority of young scholars who have been socialized in this world have adapted to it as their given academic environment. They do not recognize it as an illegitimate constraint on their freedom but as part of the complex reality in which they have to operate.

About the Author

Peter Weingart is Professor Emeritus of Sociology of Science and Science Policy at Bielefeld University, Germany. After his retirement in 2009 he was appointed to the SARChI Chair for Science Communication, Stellenbosch University, South Africa (2015–2020). He was Director of the Institute of Science and Technology Studies (IWT), 1993–2009, and of the Center for Interdisciplinary Studies (ZiF), 1989–1994, both at Bielefeld University. He is a member of the Berlin-Brandenburg Academy of Sciences and Humanities (BBAW) and of the Academy of Engineering Science (acatech). Since 2007 he has been editor of the journal Minerva. His current research interests are science advice to politics, science–media interrelations and trust in science. He has published numerous articles and books.

References

Aubert Bonn, N and Pinxten, W (2021) Advancing science or advancing careers? Researchers’ opinions on success indicators. PLoS ONE 16(2), e0243664. https://doi.org/10.1371/journal.pone.0243664
Bonn Declaration on Freedom of Scientific Research (2020) Adopted at the Ministerial Conference on the European Research Area on 20 October 2020 in Bonn. https://www.bmbf.de/SharedDocs/Downloads/files/_drp-efr-bonner_erklaerung_en_with-signatures_maerz_2021.pdf?__blob=publicationFile&v=2
Bornmann, L, Haunschild, R and Mutz, R (2021) Growth rates of modern science: a latent piecewise growth curve approach to model publication numbers from established and new literature. arXiv:2012.07675 [cs.DL]. https://doi.org/10.48550/arXiv.2012.07675
Butler, L-A, Matthias, L, Simard, M-A, Mongeon, P and Haustein, S (2023) The oligopoly’s shift to open access: how the big five academic publishers profit from article processing charges. Quantitative Science Studies 4(4), 1–33. https://doi.org/10.1162/qss_a_00272
Calderon, AJ (2018) Massification of higher education revisited. https://www.researchgate.net/publication/331521091
Costas, R and Ferreira, MR (2020) A comparison of the citing, publishing, and tweeting activity of scholars on Web of Science. In Daraio C and Glänzel W (eds), Evaluative Informetrics: The Art of Metrics-Based Research Assessment. Festschrift in Honor of Henk F. Moed. Cham: Springer, pp. 261–286. https://doi.org/10.1007/978-3-030-47665-6
de Boer, H, Enders, J and Schimank, U (2007) On the way towards New Public Management? The governance of university systems in England, the Netherlands, Austria, and Germany. In Jansen D (ed), New Forms of Governance in Research Organizations. Cham: Springer, pp. 137–152. https://doi.org/10.1007/978-1-4020-5831-8_5
de Rijcke, S, Wouters, PF, Rushforth, AD, Franssen, TP and Hammarfelt, B (2015) Evaluation practices and effects of indicator use – a literature review. Research Evaluation 25(2), 161–169. https://doi.org/10.1093/reseval/rvv038
Duffy, BE and Pooley, JD (2017) ‘Facebook for academics’: the convergence of self-branding and social media logic on Academia.edu. Social Media + Society 3(1), 1–11. https://doi.org/10.1177/2056305117696523
Entradas, M, Marcinkowski, F, Bauer, MW and Pellegrini, G (2023) University central offices are moving away from doing towards facilitating science communication: a European cross-comparison. PLoS ONE 18(10), e0290504. https://doi.org/10.1371/journal.pone.0290504
European Commission, Directorate-General for Research and Innovation, Sturn, D, Arnold, E, Debackere, K, Spaapen, J and Sivertsen, G (2018) Mutual Learning Exercise: Performance-Based Funding of University Research. Horizon 2020 Policy Support Facility. Brussels: Publications Office of the European Union. https://data.europa.eu/doi/10.2777/644014
Fanelli, D and Larivière, V (2016) Researchers’ individual publication rate has not increased in a century. PLoS ONE 11(3), e0149504. https://doi.org/10.1371/journal.pone.0149504
Fitzpatrick, K (2015) Academia, not edu. https://kfitz.info/academia-not-edu/ (accessed 3 March 2024).
Fochler, M and de Rijcke, S (2017) Implicated in the indicator game? An experimental debate. Engaging Science, Technology, and Society 3, 21–40. https://doi.org/10.17351/ests2017.108
Fyfe, A, Coate, K, Curry, S, Lawson, S, Moxham, N and Røstvik, CM (2017) Untangling Academic Publishing: A History of the Relationship between Commercial Interests, Academic Prestige and the Circulation of Research. Discussion Paper. University of St Andrews. https://doi.org/10.5281/zenodo.546100
Gläser, J (2017) A fight on epistemological quicksand: comment on the dispute between van den Besselaar et al. and Butler. Journal of Informetrics 11(3), 927–932. https://doi.org/10.1016/j.joi.2017.05.019
Goldenfein, J and Griffin, D (2022) Google Scholar – platforming the scholarly economy. Internet Policy Review 11(3). https://doi.org/10.14763/2022.3.1671
Haustein, S and Larivière, V (2015) The use of bibliometrics for assessing research: possibilities, limitations and adverse effects. In Welpe IM, Wollersheim J, Ringelhan S and Osterloh M (eds), Incentives and Performance: Governance of Knowledge-Intensive Organizations. Cham: Springer, pp. 121–139. https://doi.org/10.1007/978-3-319-09785-5_8
Hicks, D (2012) Performance-based university research funding systems. Research Policy 41(2), 251–261. https://doi.org/10.1016/j.respol.2011.09.007
Horbach, S and Halffman, W (2019) The extent and causes of academic text recycling or ‘self-plagiarism’. Research Policy 48(2), 492–502. https://doi.org/10.1016/j.respol.2017.09.004
Johnson, R, Watkinson, A and Mabe, M (2018) The STM Report: An Overview of Scientific and Scholarly Publishing. The Hague: International Association of Scientific, Technical and Medical Publishers.
Krücken, G (2021) Multiple competitions in higher education: a conceptual approach. Innovation 23(2), 163–181. https://doi.org/10.1080/14479338.2019.1684652
Krücken, G (2024) Imagined publics – on the structural transformation of higher education and science. A post-Habermas perspective. Philosophy and Social Criticism 50(1), 141–158. https://doi.org/10.1177/01914537231203544
Krücken, G and Meier, F (2006) Turning the university into an organizational actor. In Drori GS, Meyer JW and Hwang H (eds), Globalization and Organization: World Society and Organizational Change. Oxford: Oxford University Press, pp. 241–257. https://doi.org/10.1093/oso/9780199284535.003.0011
Larivière, V, Haustein, S and Mongeon, P (2015) The oligopoly of academic publishers in the digital era. PLoS ONE 10(6), e0127502. https://doi.org/10.1371/journal.pone.0127502
Larsen, PO and von Ins, M (2010) The rate of growth in scientific publication and the decline in coverage provided by Science Citation Index. Scientometrics 84(3), 575–603. https://doi.org/10.1007/s11192-010-0202-z
Lee, SJ, Schneijderberg, C, Kim, Y and Steinhardt, I (2021) Have academics’ citation patterns changed in response to the rise of World University Rankings? A test using first-citation speeds. Sustainability 13, 9515. https://doi.org/10.3390/su13179515
Martin, B (2017) When social scientists disagree: comments on the Butler–van den Besselaar debate. Journal of Informetrics 11(3), 937–940. https://doi.org/10.1016/j.joi.2017.05.021
Mirowski, P (2018) The future(s) of Open Science. Social Studies of Science 48(2), 171–203. https://doi.org/10.1177/0306312718772086
Ornstein, M (1963) The Role of Scientific Societies in the Seventeenth Century. Reprint of the third edition of 1938. London: Archon Books.
Price, DJ de S (1963) Little Science, Big Science. New York: Columbia University Press.
Romano, CPR and Boggio, A (2020) Right to science. Max Planck Encyclopedia of Comparative Constitutional Law. https://oxcon.ouplaw.com/home/mpeccol
Scott, P (2013) Rankings and online learning: a disruptive combination for higher education? In Marope PTM, Wells PJ and Hazelkorn E (eds), Rankings and Accountability in Higher Education: Uses and Misuses. Paris: UNESCO Publishing, pp. 113–128.
Severin, A, Egger, M, Eve, MP and Hürlimann, D (2020) Discipline-specific open access publishing practices and barriers to change: an evidence-based review. F1000Research 7, 1925. https://doi.org/10.12688/f1000research.17328.2
Taubert, N (2018) Open infrastructure and community: the case of astronomy. JCOM 17(2), C02. https://doi.org/10.22323/2.17020302
Thelwall, M and Sud, P (2022) Scopus 1900–2020: growth in articles, abstracts, countries, fields, and journals. Quantitative Science Studies 3(1), 37–50. https://doi.org/10.1162/qss_a_00177
Väliverronen, E, Sihvonen, T, Laaksonen, S-M and Koskela, M (2022) Branding the ‘wow-academy’: the risks of promotional culture and quasi-corporate communication in higher education. Studies in Communication Sciences 22(3), 493–514.
Van Noorden, R (2023) More than 10,000 research papers were retracted in 2023 – a new record. Nature 624, 479–481. https://doi.org/10.1038/d41586-023-03974-8
Ware, M and Mabe, M (2015) The STM Report: An Overview of Scientific and Scholarly Journal Publishing. The Hague: International Association of Scientific, Technical and Medical Publishers.
Weingart, P (2022) Trust or attention? Medialization of science revisited. Public Understanding of Science 31(3), 288–296. https://doi.org/10.1177/09636625211070888
Weingart, P and Taubert, N (2017 [2016]) Changes in scientific publishing: a heuristic for analysis. In Weingart P and Taubert N (eds), The Future of Scholarly Publishing: Open Access and the Economics of Digitisation. Cape Town: African Minds, pp. 1–36. (First published 2016, Berlin: de Gruyter.) https://doi.org/10.47622/9781928331537
Wood, A (2018) Cascade journals: what and why? The Wiley Network. https://www.wiley.com/en-us/network/publishing/research-publishing/editors/cascade-journals-what-and-why (accessed 26 February 2024).
World Conference on Higher Education (1998) World Declaration on Higher Education for the Twenty-first Century: Vision and Action and Framework for Priority Action for Change and Development in Higher Education, adopted by the World Conference on Higher Education, 9 October 1998. https://unesdoc.unesco.org/ark:/48223/pf0000141952 (accessed 18 March 2024).
Zacharewicz, T, Lepori, B, Reale, E and Jonkers, K (2019) Performance-based research funding in EU Member States – a comparative assessment. Science and Public Policy 46(1), 105–115. https://doi.org/10.1093/scipol/scy041