
The integrity of the research record: a mess so big and so deep and so tall

Published online by Cambridge University Press:  25 May 2022

William Lee*
Affiliation:
Cornwall Partnership NHS Foundation Trust, Cornwall, UK; and University of Exeter, Exeter, Devon, UK
Patricia Casey
Affiliation:
University College Dublin, Ireland
Norman Poole
Affiliation:
St George's Hospital, London, UK
Kenneth R. Kaufman
Affiliation:
Rutgers Robert Wood Johnson Medical School, New Brunswick, New Jersey, USA
Stephen M. Lawrie
Affiliation:
University of Edinburgh, UK
Gin Malhi
Affiliation:
University of Sydney, Australia
Eva Petkova
Affiliation:
NYU Grossman School of Medicine, New York University, New York, USA
Najma Siddiqi
Affiliation:
University of York, UK
Kamaldeep Bhui
Affiliation:
University of Oxford, UK
Correspondence: William Lee. Email: w.lee@exeter.ac.uk

Summary

Poor research integrity is increasingly recognised as a serious problem in science. We outline some evidence for this claim and introduce the Royal College of Psychiatrists (RCPsych) journals’ Research Integrity Group, which has been created to address this problem.

Type
Editorial
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press on behalf of the Royal College of Psychiatrists

The extent of the problem

Many research findings are incorrect, even if the studies are carried out completely correctly and all are published. [1]
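A minimal sketch of the statistical reasoning behind this claim, drawn from the cited Ioannidis paper: if R is the pre-study odds that a tested relationship is true, 1 − β the study's power and α its significance threshold, then the probability that a 'significant' finding is in fact true (the positive predictive value) is

\[ \mathrm{PPV} = \frac{(1-\beta)\,R}{(1-\beta)\,R + \alpha}. \]

The illustrative numbers that follow are our own assumptions, chosen only to make the arithmetic concrete. With R = 0.1, power 1 − β = 0.8 and α = 0.05,

\[ \mathrm{PPV} = \frac{0.8 \times 0.1}{0.8 \times 0.1 + 0.05} = \frac{0.08}{0.13} \approx 0.62, \]

so nearly 4 in 10 statistically significant findings would be false even with flawless conduct and complete publication; bias and selective reporting only worsen this.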

Sadly, only about half of all registered clinical trials are ever published [2] and the published half may well overstate benefits and understate harms. The corresponding figures for observational studies are unknown, but are likely to be worse because of the lower costs involved and the lack of any registration requirement.

‘Questionable research practices’ (QRPs) are common; they include selective reporting, outcome switching, ‘p-hacking’, the seemingly low-consequence ‘gift authorship’ and the fabrication of data. [3] These practices exist because journals tend to accept papers with eye-catching results over those with more moderate claims. [4] Published research therefore reflects the criteria required to achieve publication more than it reflects scientific accuracy. These problems extend to systematic reviews, which carry the double burden of summarising an already compromised literature while being subject to the same forces themselves.

Research fraud has been detected in about 5% of papers, just from examining the figures in the papers themselves. [5] In our view, it is likely that only a minority of research fraud is detectable in this way, so the true figure may be higher. Recent examples, since retracted, include a paper reporting a trial of hydroxychloroquine for COVID-19 published in The Lancet and a paper on risk factors for mortality in COVID-19 published in the NEJM. Less recent, but still important because of its significant ongoing public health harms and its mental health subject matter, is a paper falsely linking certain childhood vaccinations given in the UK to the development of autism; it was retracted from The Lancet after 11 years of deliberation.

The pathway to the current problem

How did we get here? There is strong competition for advancement in academia: only 0.5% of people in the UK with PhDs ever become professors, and in the USA only 12.8% achieve academic tenure. Academics and institutions are judged harshly against one another on their outputs, using dubious measures of research quality such as the impact factors of the journals in which they publish their work. Careers and livelihoods depend on doing well on such metrics, and striking, positive findings increase the probability of a paper being accepted by such journals. [4] Alongside this, journals themselves compete to publish work that will be well cited. Thus, papers in prestigious periodicals have a higher risk of retraction than papers published elsewhere, even though one might expect a prestigious journal to apply greater pre-publication scrutiny.

Yet institutional leadership at universities and learned societies persists in practices that generate and maintain these perverse incentives, which slow scientific progress, offer poor value for money to taxpayers and other funders, and ultimately harm our patients. Mercifully, aggressive research environments have recently been identified as needing attention and reform.

Thus, there are strong incentives for researchers to do wrong rather than right. Some researchers have left the sector, disillusioned and morally injured, after realising that to succeed in their careers, they need to undermine the very values that led them to become researchers in the first place.

Altogether, these issues have very serious consequences for the state of knowledge and the credibility of all scientific output. They affect the general population, who are the ultimate consumers both of research results and of the healthcare informed by those results.

The RCPsych's response

So, what can a single learned society and a small group of scientific journals do to address this enormous problem?

We must accept the pervasiveness and importance of poor research integrity. Accordingly, we have created a Research Integrity Group of the Editorial Boards of the BJPsych, BJPsych Open and BJPsych Bulletin, primarily to oversee investigations following allegations of poor research integrity in the journals’ published output. We are making this available as a resource to BJPsych Advances and BJPsych International as well. The work of our group will be of value generally to the College's activities, especially where there are disputes of evidence.

At a system level, we have tightened the initial checks on submitted articles and research papers. The details of all trials are now carefully checked, including confirmation that ethics approval and pre-registration were complete before recruitment began. Automated plagiarism screening is now run on every submission. Evidence of outcome switching or poor adherence to protocols results in rejection, whether it becomes apparent during the initial checks or later. We also now support the use of preprint servers. Throughout, we are careful to preserve the pluralism of approaches necessary in our specialty and to distinguish between scientific weaknesses, which are inevitable, and poor practice, which is not.

A reasonable concern is that any paper rejected by the RCPsych journals may still be published elsewhere. However, we cannot control that, and high standards are necessary to ensure quality publications in our journals and to lead the development of scientific publication.

As well as overseeing investigations following expressions of concern about research integrity and working to improve journal and College processes, the Research Integrity Group is linking with wider efforts to improve research integrity: we have met with prominent campaigners and with managers from other journals, looking beyond the immediate problems in order to reinforce efforts to tackle the perverse incentives that create, nourish, tolerate and protect poor research practices across our sector.

We are trusted to care for people often at the lowest points in their lives. We must bring the best evidence to bear on their individual difficulties. By making those areas over which we have influence as good as they can be, we can honour the motto of our College and ‘Let Wisdom Guide’.

Supplementary material

Further reading can be found in the supplementary material, available online at https://doi.org/10.1192/bjp.2022.74.

Data availability

Data availability is not applicable to this article as no new data were created or analysed in this study.

Author contributions

W.L. had the idea and drafted the initial manuscript. All the authors contributed to its further development, both before and after peer review, and all have seen and approved the final version.

Funding

This research received no specific grant from any funding agency, commercial or not-for-profit sectors.

Declaration of interest

W.L. is a Deputy Editor of the BJPsych; P.C. is Editor-in-Chief of BJPsych Advances; N.P. is Editor-in-Chief of BJPsych Bulletin; K.K. is Editor-in-Chief of BJPsych Open; S.L. is an editorial board member of BJPsych; G.M. is a Deputy Editor of BJPsych; E.P. is a Deputy Editor of BJPsych Open; N.S. is an Associate Editor of BJPsych; and K.B. is Editor-in-Chief of BJPsych. None of the authors took part in the review or decision-making process of this paper.

Footnotes

All nine authors are members of the RCPsych's Research Integrity Group.

References

1. Ioannidis JPA. Why most published research findings are false. PLoS Med 2005; 2: e124.
2. Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, et al. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess 2010; 14(8): 1–220.
3. Gerrits RG, Jansen T, Mulyanto J, van den Berg MJ, Klazinga NS, Kringos DS. Occurrence and nature of questionable research practices in the reporting of messages and conclusions in international scientific health services research publications: a structured assessment of publications authored by researchers in the Netherlands. BMJ Open 2019; 9: e027903.
4. Siontis KCM, Evangelou E, Ioannidis JPA. Magnitude of effects in clinical trials published in high-impact general medical journals. Int J Epidemiol 2011; 40: 1280–91.
5. Carlisle JB. The analysis of 168 randomised controlled trials to test data integrity. Anaesthesia 2012; 67: 521–37.