Scientists have started to explore whether novel artificial intelligence (AI) tools based on large language models, such as GPT-4, could support the scientific peer review process. We sought to understand (i) whether AI and human reviewers are able to distinguish fabricated AI-generated conference abstracts from human-written abstracts reporting on actual research, and (ii) how the quality assessments of the reported research by AI and human reviewers correspond to each other. We conducted a large-scale field experiment during a medium-sized scientific conference, relying on 305 human-written and 20 AI-written abstracts that were reviewed either by AI or by 217 human reviewers. The results show that human reviewers and GPTZero were better at discerning (AI vs. human) authorship than GPT-4. Regarding quality assessments, agreement was rather low within both human–human and human–AI reviewer pairs, but AI reviewers were more aligned with human reviewers in classifying the very best abstracts. This indicates that AI could become a prescreening tool for scientific abstracts. The results are discussed with regard to the future development and use of AI tools during the scientific peer review process.
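The abstract does not state which agreement statistic was used, but a chance-corrected measure such as Cohen's kappa is one common way to quantify agreement within reviewer pairs. The sketch below is purely illustrative: the 1–5 quality scale, the ratings, and the variable names are hypothetical assumptions, not data from this study.

```python
# Illustrative sketch only: hypothetical 1-5 quality ratings given to the same
# ten abstracts by a human reviewer and by an AI reviewer (not study data).
from sklearn.metrics import cohen_kappa_score

human_scores = [5, 4, 4, 2, 3, 5, 1, 4, 3, 2]  # hypothetical human ratings
ai_scores    = [5, 3, 4, 2, 4, 5, 2, 4, 3, 3]  # hypothetical AI ratings

# Quadratic weighting treats near-misses (e.g., 4 vs 5) as partial agreement,
# which suits ordinal quality scores; values near 0 indicate chance-level agreement.
kappa = cohen_kappa_score(human_scores, ai_scores, weights="quadratic")
print(f"Quadratic-weighted Cohen's kappa: {kappa:.2f}")
```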
Blind review is ubiquitous in contemporary science, but there is no consensus among stakeholders and researchers about when, to what extent, or why blind review should be done. In this essay, we explain why blinding enhances the impartiality and credibility of science while also defending a norm according to which blind review is a baseline presumption in scientific peer review.
As the scientific community becomes aware of low replicability rates in the extant literature, peer-reviewed journals have begun implementing initiatives with the goal of improving replicability. Such initiatives center on various rules to which authors must adhere to demonstrate their engagement in best practices. Preliminary evidence in the psychological science literature demonstrates a degree of efficacy for these initiatives. Given this efficacy, it would be advantageous for other fields of behavioral sciences to adopt similar measures. This letter discusses lessons learned from psychological science while also addressing the unique challenges other sciences face in adopting the measures most appropriate for their fields. We offer broad considerations for peer-reviewed journals in their implementation of specific policies and recommend that governing bodies of science prioritize the funding of research that addresses these measures.
Research articles in the clinical and translational science literature commonly use quantitative data to inform evaluation of interventions, learn about the etiology of disease, or develop methods for diagnostic testing or risk prediction of future events. The peer review process must evaluate the methodology used therein, including use of quantitative statistical methods. In this manuscript, we provide guidance for peer reviewers tasked with assessing quantitative methodology, intended to complement guidelines and recommendations that exist for manuscript authors. We describe components of clinical and translational science research manuscripts that require assessment including study design and hypothesis evaluation, sampling and data acquisition, interventions (for studies that include an intervention), measurement of data, statistical analysis methods, presentation of the study results, and interpretation of the study results. For each component, we describe what reviewers should look for and assess; how reviewers should provide helpful comments for fixable errors or omissions; and how reviewers should communicate uncorrectable and irreparable errors. We then discuss the critical concepts of transparency and acceptance/revision guidelines when communicating with responsible journal editors.
Peer review is supposed to ensure that published work, in philosophy and in other disciplines, meets high standards of rigor and interest. But many people fear that it is no longer fit to play this role. This Element examines some of their concerns. It uses evidence that critics of peer review sometimes cite to show its failures, as well as empirical literature on the reception of bullshit, to advance positive claims about how the assessment of scholarly work is appropriately influenced by features of the context in which it appears: for example, by readers' knowledge of authorship or of publication venue. Reader attitude makes an appropriate and sometimes decisive difference to perceptions of argument quality. This Element finishes by considering the difference that authors' attitudes to their own arguments can appropriately make to their reception. This title is also available as Open Access on Cambridge Core.
In the years following FDA approval of direct-to-consumer genetic health risk (DTCGHR) testing, millions of people in the US have sent their DNA to companies to receive personal genome health risk information without physician or other learned medical professional involvement. In Personal Genome Medicine, Michael J. Malinowski examines the ethical, legal, and social implications of this development. Drawing from the past and present of medicine in the US, Malinowski applies law, policy, public and private sector practices, and governing norms to analyze the commercial personal genome sequencing and testing sectors and to assess their impact on the future of US medicine. Written in relatable and accessible language, the book also proposes regulatory reforms for government and medical professionals that will enable technological advancements while maintaining personal and public health standards.
This chapter covers the appraisal of published and unpublished works in fiction and non-fiction, prose and poetry, in single volumes, monographs, series and collections. These works are intended, for the most part, to be published in book formats or in formal journal publications, in print, electronically and online.
To survive and prosper, researchers must demonstrate a successful record of publications in journals well-regarded by their fields. This chapter discusses how to successfully publish research in journals in the social and behavioral sciences and is organized into four sections. The first section highlights important factors that are routinely involved in the process of publishing a paper in refereed journals. The second section features some factors that are not necessarily required to publish a paper but that, if present, can positively influence scientific productivity. The third section discusses some pitfalls scholars should avoid to protect their scientific career. The last section addresses general publication issues within the science community. We also recommend further resources for those interested in learning more about successfully publishing research.
The peer review process of publication has limitations, which are discussed. The influence of the pharmaceutical industry can be both beneficial and harmful, and both aspects are examined.
Despite many flaws, including variable quality and a lack of universal standards, peer review – the formal process of critically assessing knowledge claims prior to publication – remains a bedrock norm of science. It therefore also underlies the scientific authority of the IPCC. Most literature used in IPCC assessments has already been peer reviewed by scientific journals. IPCC assessments are themselves reviewed at multiple stages of composition, first by Lead Authors, then by scientific experts and non-governmental organisations outside the IPCC, and finally by government representatives. Over time, assessment review has become increasingly inclusive and transparent: anyone who claims expertise may participate in review, and all comments and responses are published after the assessment cycle concludes. IPCC authors are required to respond to all comments. The IPCC review process is the most extensive, open, and inclusive in the history of science. Challenges include how to manage a huge and ever-increasing number of review comments, and how to deal responsibly with review comments that dispute the fundamental framing of major issues.
Head and neck (HN) radiotherapy (RT) is complex, involving multiple target and organ at risk (OAR) structures delineated by the radiation oncologist. Site-agnostic peer review after RT plan completion is often inadequate for thorough review of these structures. In-depth review of RT contours is critical to maintain high-quality RT and optimal patient outcomes.
Materials and Methods:
In August 2020, the HN RT Quality Assurance Conference, a weekly teleconference that included at least one radiation oncology HN specialist, was activated at our institution. Targets and OARs were reviewed in detail prior to RT plan creation. A parallel implementation study recorded patient factors and outcomes of these reviews. A major change was any modification to the high-dose planning target volume (PTV) or the prescription dose/fractionation; a minor change was any modification to the intermediate-dose PTV, low-dose PTV, or any OAR. We analysed the results of consecutive RT contour reviews during the first 20 months after the conference's initiation.
Results:
A total of 208 patients treated by 8 providers were reviewed: 86·5% from the primary tertiary care hospital and 13·5% from regional practices. A major change was recommended in 14·4% of cases and implemented in 25 of 30 cases (83·3%). A minor change was recommended in 17·3% of cases and implemented in 32 of 36 cases (88·9%). A survey of participants found that all (n = 11) strongly agreed or agreed that the conference was useful.
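As a quick consistency check using only the figures reported above, the recommendation percentages and the implementation counts line up; the snippet below is a minimal arithmetic sketch, introducing no new data.

```python
# Consistency check of the figures reported above (no new data introduced).
n_reviewed = 208
major_recommended = round(0.144 * n_reviewed)  # ~30 cases, matching "25 of 30"
minor_recommended = round(0.173 * n_reviewed)  # ~36 cases, matching "32 of 36"
print(major_recommended, f"{25 / 30:.1%}")     # 30, 83.3% implemented
print(minor_recommended, f"{32 / 36:.1%}")     # 36, 88.9% implemented
```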
Conclusion:
Dedicated review of RT targets/OARs with an HN subspecialist is associated with substantial rates of suggested and implemented modifications to the contours.
Peer review is an essential quality assurance component of radiation therapy planning. A growing body of literature has demonstrated substantial rates of suggested plan changes resulting from peer review. There remains a paucity of data on the impact of peer review rounds for stereotactic body radiation therapy (SBRT). We therefore aim to evaluate the outcomes of peer review in this specific patient cohort.
Methods and materials:
We conducted a retrospective review of all SBRT cases that underwent peer review from July 2015 to June 2018 at a single institution. Weekly peer review rounds were grouped according to cancer subsite and attended by radiation oncologists, medical physicists and medical radiation technologists. We prospectively recorded ‘learning moments’, defined as cases with suggested changes or where an educational discussion beyond routine management occurred, and critical errors, defined as errors that could alter clinical outcomes, during peer review. Plan changes implemented after peer review were documented.
Results:
Nine hundred thirty-four SBRT cases were included. The most common treatment sites were lung (518, 55%), liver (196, 21%) and spine (119, 13%). Learning moments were identified in 161 cases (17%) and translated into plan changes in 28 cases (3%). Two critical errors (0.2%) were identified: an inadequate planning target volume margin and an incorrect image set used for contouring. The rate of learning moments was significantly higher for lower-volume SBRT sites (defined as ≤30 cases/year) than for higher-volume SBRT sites (29% vs 16%, respectively; p = 0.001).
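The abstract reports this comparison (29% vs 16%, p = 0.001) without the underlying counts. As an illustration of how such a p-value can arise, the sketch below back-solves approximate group sizes from the reported rates (roughly 89 lower-volume and 845 higher-volume cases) and runs a chi-square test of independence; the counts are reconstructed estimates, not the study's published data, and the test actually used in the paper may differ.

```python
# Approximate reconstruction from the reported rates: ~29% of ~89 lower-volume
# cases and ~16% of ~845 higher-volume cases, ~161 learning moments in total.
# These counts are back-solved estimates, not the study's raw data.
from scipy.stats import chi2_contingency

table = [[26, 89 - 26],      # lower-volume sites: learning moments vs. none
         [135, 845 - 135]]   # higher-volume sites: learning moments vs. none

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")  # roughly p ~ 0.002, the same order as the reported 0.001
```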
Conclusions:
Peer review for SBRT cases revealed a low rate of critical errors, but did result in implemented plan changes in 3% of cases, and either educational discussion or suggestions of plan changes in 17% of cases. All SBRT sites appear to benefit from peer review, though lower-volume sites may require particular attention.
There is currently a heightened need for transparency in pharmaceutical sectors. The inclusion of real-world (RW) evidence, in addition to clinical trial evidence, in decision-making processes was an important step toward a more inclusive established value proposition. This advance has introduced new transparency challenges. Increasing transparency is a critical step toward accelerating improvement in the type, quality, and accessibility of data, regardless of whether they originate from clinical trials or from RW studies. However, so far, advances in transparency have been largely restricted to clinical trials, and there remains a lack of similar expectations or standards of transparency concerning the generation and reporting of RW data. This perspective paper aims to highlight the need for transparency concerning RW studies, data, and evidence across health care sectors, to identify areas for improvement, and to provide concrete recommendations and practices for the future. Specific issues are discussed from different stakeholder perspectives, culminating in recommended actions for each stakeholder to improve RW study, data, and evidence transparency. Furthermore, a list of potential guidelines for consideration by stakeholders is proposed. While recommendations are made from different stakeholder perspectives, true transparency in the processes involved in generating, reporting, and using RW evidence will require a concerted effort from all stakeholders across health care sectors.
The COVID-19 pandemic exacerbated gender disparities in some academic disciplines. This study examined the association of the pandemic with gender authorship disparities in clinical neuropsychology (CN) journals.
Method:
Author bylines of 1,018 initial manuscript submissions to four major CN journals from March 15 through September 15 of both 2019 and 2020 were coded for binary gender. Additionally, authorship of 40 articles published on pandemic-related topics (COVID-19, teleneuropsychology) across nine CN journals was coded for binary gender.
Results:
Initial submissions to these four CN journals increased during the pandemic (+27.2%), with comparable increases in total number of authors coded as either women (+23.0%) or men (+25.4%). Neither the average percentage of women on manuscript bylines nor the proportion of women who were lead and/or corresponding authors differed significantly across time. Moreover, the representation of women as authors of pandemic-related articles did not differ from expected frequencies in the field.
Conclusions:
Findings suggest that representation of women as authors of peer-reviewed manuscript submissions to some CN journals did not change during the initial months of the COVID-19 pandemic. Future studies might examine how risk and protective factors may have influenced individual differences in scientific productivity during the pandemic.
Peer review of searches is a process whereby both the search strategies and the search process description are reviewed, ideally using an evidence-based checklist.
Rationale
As the search strategy underpins any well-conducted evidence synthesis, its quality could affect the final result. Evidence shows, however, that search strategies are prone to error.
Findings
There is increasing awareness and use of the PRESS Evidence-Based Checklist for peer review of search strategies at the outset of evidence syntheses, before the searches are run, and this practice is now recommended by a number of evidence synthesis organizations.
Recommendations and conclusions
Searches for evidence syntheses should be peer reviewed by a suitably qualified and experienced librarian or information specialist after being designed, ideally, by another suitably qualified and experienced librarian or information specialist. Peer review of searches should take place at two important stages in the evidence synthesis process: at the outset of the project, before the searches are run, and at the prepublication stage. There is little empirical evidence, however, to support the effectiveness of peer review of searches, and further research is required to assess this. Those wishing to stay up to date with the latest developments in information retrieval, including peer review of searches, should consult the SuRe Info resource (http://www.sure-info.org), which seeks to help information specialists and others by providing easy access to the findings of current information retrieval methods research and thus support more research-based information retrieval practice.