
21 - Meta-analysis: Assessing Heterogeneity Using Traditional and Contemporary Approaches

from 3 - Methods for Understanding Consumer Psychology

Published online by Cambridge University Press:  30 March 2023

Cait Lamberton, Wharton School, University of Pennsylvania
Derek D. Rucker, Kellogg School, Northwestern University, Illinois
Stephen A. Spiller, Anderson School, University of California, Los Angeles

Summary

A meta-analysis is a statistical analysis that combines and contrasts two or more studies of a common phenomenon. Its emphasis is on quantifying the heterogeneity in effects across studies, identifying moderators of this heterogeneity, and quantifying the association between such moderators and effects. Psychological research increasingly treats heterogeneity not as a nuisance but as a boon: it advances theory, gauges generalizability, identifies moderators and boundary conditions, and assists in planning future studies. In line with this view, we make the assessment of heterogeneity the focus of this chapter. Specifically, we use two case studies to illustrate the assessment of heterogeneity and the advantages that contemporary approaches to meta-analysis offer over the traditional approach. Following the case studies, we review several important considerations relevant to meta-analysis and conclude with a brief summation.
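The traditional approach to assessing heterogeneity that the chapter contrasts with contemporary ones is commonly operationalized via the DerSimonian–Laird estimator of the between-study variance τ² together with the I² statistic. The sketch below, in Python with entirely hypothetical effect sizes and variances (the function name and data are ours, not from the chapter), shows how these quantities are computed from per-study estimates:

```python
# Minimal sketch of a traditional random-effects meta-analysis:
# DerSimonian-Laird estimate of between-study variance (tau^2) and
# the I^2 heterogeneity statistic. Data below are hypothetical.

def dersimonian_laird(effects, variances):
    """Return (pooled effect, tau^2, I^2) given per-study effects and variances."""
    w = [1.0 / v for v in variances]                      # inverse-variance weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q: weighted squared deviations from the fixed-effect estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                         # truncated at zero
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0         # share of variation beyond sampling error
    # Random-effects weights fold tau^2 into each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2, i2

# Hypothetical standardized mean differences and their sampling variances
effects = [0.10, 0.60, -0.05, 0.45, 0.80]
variances = [0.02, 0.03, 0.02, 0.04, 0.05]
pooled, tau2, i2 = dersimonian_laird(effects, variances)
print(f"pooled={pooled:.3f}, tau2={tau2:.4f}, I2={i2:.1%}")
```

Note that the truncation of τ² at zero is one of the limitations of the traditional approach discussed in the literature cited by the chapter (e.g., Chung, Rabe-Hesketh, and Choi's work on avoiding zero between-study variance estimates), and contemporary multilevel/multivariate approaches relax several of its assumptions.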

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2023


