
Power considerations in bilingualism research: Time to step up our game

Published online by Cambridge University Press:  26 August 2020

Marc Brysbaert*
Affiliation: Ghent University, Belgium
*Address for correspondence: Marc Brysbaert, E-mail: marc.brysbaert@ugent.be

Abstract

Low power in empirical studies can be compared to blurred vision. It makes the signal ambiguous, so that conclusions depend more on interpretation than on observation. Data patterns that look sensible are published as evidence for theoretical positions, and unclear patterns are discarded as noise, whereas both could be due to sampling error or could be a perfect reflection of the population parameters. Simulations indicate that little research with sample sizes below 100 participants per group yields a picture with enough resolution to draw firm conclusions. This is particularly true for research comparing groups of people and involving interaction effects. As a result, it is to be feared that many findings in bilingualism research do not have a firm basis, certainly not if they go beyond a simple comparison of two within-participants conditions.
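The abstract's claim about sample size and resolution can be illustrated with a Monte Carlo power simulation for a simple two-group comparison. The sketch below is illustrative only: the effect size (d = 0.4), alpha level (.05), and simulation settings are assumptions chosen for the example, not values taken from the article.

```python
# Minimal sketch: Monte Carlo estimate of statistical power for a
# two-sample t-test, showing how power grows with participants per group.
# Effect size d = 0.4 and alpha = .05 are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

def simulated_power(n_per_group, d, n_sims=5000, alpha=0.05):
    """Proportion of simulated two-sample t-tests reaching significance."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)  # control group
        b = rng.normal(d, 1.0, n_per_group)    # group shifted by d SDs
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sims

for n in (20, 50, 100):
    print(f"n = {n:3d} per group: power = {simulated_power(n, d=0.4):.2f}")
```

Under these assumptions, power only approaches the conventional .80 benchmark around 100 participants per group, consistent with the abstract's argument; interaction effects, which involve differences between such differences, require considerably larger samples still.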

Type: Review Article
Copyright © The Author(s), 2020. Published by Cambridge University Press

