
Publication bias in ecology and evolution: an empirical assessment using the ‘trim and fill’ method

Published online by Cambridge University Press: 05 June 2002

MICHAEL D. JENNIONS
Affiliation:
School of Botany and Zoology, Australian National University, Canberra, A.C.T. 0200, Australia; Smithsonian Tropical Research Institute, Unit 0948, APO AA 34002-0948, USA
ANDERS P. MØLLER
Affiliation:
Laboratoire d'Ecologie Evolutive Parasitaire, CNRS FRE 2365, Université Pierre et Marie Curie, 7, quai St. Bernard, Case 237, F-75252 Paris Cedex 5, France

Abstract

Recent reviews of specific topics, such as the relationship between male attractiveness to females and fluctuating asymmetry or attractiveness and the expression of secondary sexual characters, suggest that publication bias might be a problem in ecology and evolution. In these cases, there is a significant negative correlation between the sample size of published studies and the magnitude or strength of the research findings (formally the ‘effect size’). If all studies that are conducted are equally likely to be published, irrespective of their findings, there should not be a directional relationship between effect size and sample size; only a decrease in the variance in effect size as sample size increases, due to a reduction in sampling error. One interpretation of these reports of negative correlations is that studies with small sample sizes and weaker findings (smaller effect sizes) are less likely to be published. If the biological literature is systematically biased, this could undermine the attempts of reviewers to summarise actual biological relationships, by inflating estimates of average effect sizes. But how common is this problem? And does it really affect the general conclusions of literature reviews? Here, we examine data sets of effect sizes extracted from 40 peer-reviewed, published meta-analyses. We estimate how many studies are missing using the newly developed ‘trim and fill’ method. This method uses asymmetry in plots of effect size against sample size (‘funnel plots’) to detect ‘missing’ studies. For random-effects models of meta-analysis, 38% (15/40) of data sets had a significant number of ‘missing’ studies. After correcting for potential publication bias, 21% (8/38) of weighted mean effects were no longer significantly greater than zero, and 15% (5/34) were no longer statistically robust when we used random-effects models in a weighted meta-analysis.
The mean correlation between sample size and the magnitude of standardised effect size was also significantly negative (rs =−0·20, P<0·0001). Individual correlations were significantly negative (P<0·10) in 35% (14/40) of cases. Publication bias may therefore affect the main conclusions of at least 15–21% of meta-analyses. We suggest that future literature reviews assess the robustness of their main conclusions by correcting for potential publication bias using the ‘trim and fill’ method.
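The logic of the ‘trim and fill’ method described in the abstract — trim the asymmetric extreme studies from one side of the funnel, re-estimate the mean, then ‘fill’ in mirror-image studies for the missing ones — can be sketched in a deliberately simplified, unweighted form. The sketch below is an illustrative assumption, not the authors' procedure: it uses the R0 (rightmost-run) estimator of the number of missing studies, assumes bias suppresses the smallest effects (so the ‘extra’ studies sit on the right of the funnel), and ignores the precision weighting a real meta-analysis would apply (in practice one would use an established implementation such as `trimfill()` in the R package `metafor`). The function name and toy data are hypothetical.

```python
def trim_and_fill_r0(effects, max_iter=50):
    """Simplified, unweighted trim-and-fill sketch (R0 estimator).

    Assumes publication bias removes the smallest effects, leaving
    'extra' asymmetric studies on the right of the funnel, and that
    there are no ties among the absolute deviations.
    Returns (k0, adjusted_mean): the estimated number of missing
    studies and the mean after mirror-image studies are filled in.
    """
    ys = sorted(effects)
    n = len(ys)
    k0 = 0
    for _ in range(max_iter):
        # trim the k0 rightmost (largest) effects, re-estimate the mean
        trimmed = ys[:n - k0] if k0 else ys
        mu = sum(trimmed) / len(trimmed)
        dev = [y - mu for y in ys]
        # rank all studies by |deviation|; gamma = length of the run of
        # positive deviations occupying the largest ranks
        order = sorted(range(n), key=lambda i: abs(dev[i]))
        gamma = 0
        for i in reversed(order):
            if dev[i] > 0:
                gamma += 1
            else:
                break
        new_k0 = max(0, gamma - 1)  # R0 estimator of missing studies
        if new_k0 == k0:
            break
        k0 = new_k0
    # 'fill': reflect the k0 trimmed effects about the trimmed mean
    filled = ys + [2 * mu - y for y in ys[n - k0:]]
    return k0, sum(filled) / len(filled)


# Toy funnel: a symmetric core of studies plus two implausibly large
# effects with no small-effect counterparts (hypothetical data).
k0, adjusted = trim_and_fill_r0(
    [-0.5, -0.3, -0.1, 0.1, 0.3, 0.5, 2.0, 2.4]
)
# detects 1 'missing' study; adjusted mean ~0.286 vs. naive mean 0.55
```

Because each filled study is the mirror image of a trimmed one about the trimmed mean, the adjusted mean coincides with the trimmed mean; the correction therefore pulls the naive, inflated estimate back towards the symmetric core, which is exactly the sense in which some ‘significant’ weighted mean effects in the abstract cease to be significant after correction.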

Type: Review Article
Copyright: © Cambridge Philosophical Society 2002
