
Part II - Important Methodological Considerations

Published online by Cambridge University Press: 12 December 2024

John E. Edlund, Rochester Institute of Technology, New York
Austin Lee Nichols, Central European University, Vienna
Publisher: Cambridge University Press
Print publication year: 2024


References

Agresti, A., & Finlay, B. (2008). Statistical Methods for the Social Sciences. CRC Press.
Arnold, S. F. (1990). Mathematical Statistics. Prentice Hall.
Bartholomew, D. J. (1996). The Statistical Approach to Social Measurement. Academic Press.
Bevington, P. R., & Robinson, D. K. (2003). Data Reduction and Error Analysis for the Physical Sciences. McGraw-Hill.
Bollen, K. A. (1980). Issues in the comparative measurement of political democracy. American Sociological Review, 45, 370–390.
Bollen, K. A. (1989). Structural Equations with Latent Variables. Wiley.
Casella, G., & Berger, R. L. (2002). Statistical Inference. Wadsworth.
Crocker, L., & Algina, J. (1986). Introduction to Classical and Modern Test Theory. Harcourt College Publishers.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334.
DasGupta, A. (2008). Asymptotic Theory of Statistics and Probability. Springer.
Efron, B., & Tibshirani, R. J. (1993). An Introduction to the Bootstrap. Chapman & Hall/CRC.
Embretson, S. E., & Reise, S. P. (2000). Item Response Theory for Psychologists. Lawrence Erlbaum Associates.
Hu, L.-T., Bentler, P. M., & Kano, Y. (1992). Can test statistics in covariance structure analysis be trusted? Psychological Bulletin, 112(2), 351–362.
Jöreskog, K. G. (1971). Statistical analysis of sets of congeneric tests. Psychometrika, 36, 109–133.
Judd, C. M., McClelland, G. H., & Ryan, C. S. (2017). Data Analysis: A Model Comparison Approach to Regression, ANOVA, and Beyond, 3rd ed. Routledge.
Kelley, T. L. (1927). Interpretation of Educational Measurement. World Book Company.
Lord, F. M., & Novick, M. (1968). Statistical Theories of Mental Test Scores. Addison-Wesley.
Maxwell, A. E. (1967). The effect of correlated errors on estimates of reliability coefficients. Educational and Psychological Measurement, 28, 803–811.
McDonald, R. P. (1999). Test Theory: A Unified Treatment. Lawrence Erlbaum Associates.
McNeish, D. (2018). Thanks coefficient alpha, we'll take it from here. Psychological Methods, 23, 412–433.
Novick, M. R., & Lewis, C. (1967). Coefficient alpha and the reliability of composite measurements. Psychometrika, 32, 1–13.
Rao, C. R. (1973). Linear Statistical Inference and Its Applications. Wiley.
Raykov, T. (1997). Scale reliability, Cronbach's coefficient alpha, and violations of essential tau-equivalence for fixed congeneric components. Multivariate Behavioral Research, 32, 329–354.
Raykov, T. (2001a). Bias of coefficient alpha for congeneric measures with correlated errors. Applied Psychological Measurement, 25, 69–76.
Raykov, T. (2001b). Estimation of congeneric scale reliability via covariance structure models with nonlinear constraints. British Journal of Mathematical and Statistical Psychology, 54, 315–323.
Raykov, T. (2007). Reliability if deleted, not "alpha if deleted": Evaluation of scale reliability following component deletion. British Journal of Mathematical and Statistical Psychology, 60, 201–216.
Raykov, T. (2008). "Alpha if item deleted": A note on loss of criterion validity in scale development if maximising coefficient alpha. British Journal of Mathematical and Statistical Psychology, 61, 275–285.
Raykov, T. (2009). Evaluation of scale reliability for unidimensional measures using latent variable modeling. Measurement and Evaluation in Counseling and Development, 42, 222–232.
Raykov, T. (2012). Scale development using structural equation modeling. In Hoyle, R. (ed.), Handbook of Structural Equation Modeling (pp. 472–492). Guilford Press.
Raykov, T. (2019a). Strong consistency of reliability estimators for multiple-component measuring instruments. Structural Equation Modeling, 26, 750–756.
Raykov, T. (2019b). Strong convergence of the coefficient alpha estimator for reliability of multiple-component measuring instruments. Structural Equation Modeling, 26, 430–436.
Raykov, T. (2023). Psychometric scale evaluation using structural equation and latent variable modeling. In Hoyle, R. (ed.), Handbook of Structural Equation Modeling, 2nd ed. Guilford Press.
Raykov, T., Anthony, J. C., & Menold, N. (2023). On the importance of coefficient alpha for measurement research: Loading equality is not necessary for alpha's utility as a scale reliability index. Educational and Psychological Measurement, 83(4), 766–781.
Raykov, T., Dimitrov, D. M., & Asparouhov, T. (2010). Evaluation of scale reliability with binary measures using latent variable modeling. Structural Equation Modeling, 17, 122–132.
Raykov, T., Doebler, P., & Marcoulides, G. A. (2022). Applications of Bayesian confirmatory factor analysis in behavioral measurement: Strong convergence of a Bayesian parameter estimator. Measurement: Interdisciplinary Research and Perspectives, 20(4), 215–227.
Raykov, T., & Marcoulides, G. A. (2004). Using the delta method for approximate interval estimation of parametric functions in covariance structure models. Structural Equation Modeling, 11, 659–675.
Raykov, T., & Marcoulides, G. A. (2006). A First Course in Structural Equation Modeling. Lawrence Erlbaum Associates.
Raykov, T., & Marcoulides, G. A. (2011). Introduction to Psychometric Theory. Routledge.
Raykov, T., & Marcoulides, G. A. (2015a). A direct latent variable modeling-based procedure for evaluation of coefficient alpha. Educational and Psychological Measurement, 75, 146–156.
Raykov, T., & Marcoulides, G. A. (2015b). Scale reliability evaluation in heterogeneous populations. Educational and Psychological Measurement, 75, 875–892.
Raykov, T., & Marcoulides, G. A. (2016a). Scale reliability evaluation under multiple assumption violations. Structural Equation Modeling, 23, 302–313.
Raykov, T., & Marcoulides, G. A. (2016b). On examining specificity in latent construct indicators. Structural Equation Modeling, 23, 845–855.
Raykov, T., & Marcoulides, G. A. (2018). A Course in Item Response Theory and Modeling with Stata. Stata Press.
Raykov, T., & Marcoulides, G. A. (2019). Thanks, coefficient alpha – we still need you! Educational and Psychological Measurement, 79, 200–210.
Raykov, T., & Marcoulides, G. A. (2021). On the pitfalls of estimating and using standardized reliability coefficients. Educational and Psychological Measurement, 791–810.
Raykov, T., & Marcoulides, G. A. (2023). Evaluating the discrepancy between scale reliability and Cronbach's coefficient alpha using latent variable modeling. Measurement: Interdisciplinary Research and Perspectives, 21(1), 29–37.
Raykov, T., Marcoulides, G. A., & Chang, C. (2016). Studying population heterogeneity in finite mixture settings using latent variable modeling. Structural Equation Modeling, 23, 726–730.
Raykov, T., Marcoulides, G. A., & Li, T. (2016). Measurement instrument validity evaluation in finite mixtures. Educational and Psychological Measurement, 76, 1026–1044.
Raykov, T., Marcoulides, G. A., & Patelis, T. (2015). The importance of the assumption of uncorrelated errors in psychometric theory. Educational and Psychological Measurement, 75, 634–647.
Raykov, T., & Shrout, P. E. (2002). Reliability of scales with general structure: Point and interval estimation using a structural equation modeling approach. Structural Equation Modeling, 9, 195–212.
Raykov, T., West, B. T., & Traynor, A. (2015). Evaluation of coefficient alpha for multiple component measuring instruments in complex sample designs. Structural Equation Modeling, 22, 429–438.
Rhemtulla, M., Brosseau-Liard, P. E., & Savalei, V. (2012). When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions. Psychological Methods, 17, 354–373.
Sijtsma, K. (2009). On the use, the misuse, and the very limited usefulness of Cronbach's alpha. Psychometrika, 74, 107–120.
Spearman, C. (1904). "General intelligence," objectively determined and measured. American Journal of Psychology, 15, 201–292.
Yang, Y., & Green, S. B. (2009). Reliability of summed item scores using structural equation modeling: An alternative to coefficient alpha. Psychometrika, 75, 155–167.
Zimmerman, D. W. (1972). Test reliability and the Kuder-Richardson formulas: Derivation from probability theory. Educational and Psychological Measurement, 32, 939–954.
Zimmerman, D. W. (1975). Probability spaces, Hilbert spaces, and the axioms of test theory. Psychometrika, 40, 395–412.

References

Ajzen, I. (2005). The influence of attitudes on behavior. In Albarracín, D., Johnson, B. T., & Zanna, M. P. (eds.), The Handbook of Attitudes (pp. 173–221). Lawrence Erlbaum Associates.
American Psychological Association (2019, August). Depression Assessment Instruments. www.apa.org/depression-guideline/assessment
Bode, B., King, A., Russell-Jones, D., & Billings, L. K. (2021). Leveraging advances in diabetes technologies in primary care: A narrative review. Annals of Medicine, 53(1), 805–816. https://doi.org/10.1080/07853890.2021.1931427
Borsboom, D., Mellenbergh, G. J., & Van Heerden, J. (2004). The concept of validity. Psychological Review, 111(4), 1061–1071. https://doi.org/10.1037/0033-295X.111.4.1061
Buntins, M., Buntins, K., & Eggert, F. (2017). Clarifying the concept of validity: From measurement to everyday language. Theory & Psychology, 27(5), 703–710. https://doi.org/10.1177/0959354317702256
Camargo, S. L., Herrera, A. N., & Traynor, A. (2018). Looking for a consensus in the discussion about the concept of validity: A Delphi study. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 14(4), 146–155. https://doi.org/10.1027/1614-2241/a000157
Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the Multitrait-Multimethod Matrix. Psychological Bulletin, 56(2), 81–105. http://dx.doi.org/10.1037/h0046016
Chartrand, T. L., & Lakin, J. L. (2013). The antecedents and consequences of human behavioral mimicry. Annual Review of Psychology, 64, 285–308.
Combe, G. (1835). The Constitution of Man, Considered in Relation to External Objects. John Anderson Jr.
Combe, G. (1851). A System of Phrenology. Benjamin B. Mussey.
Conan Doyle, A. (1981). The Original Illustrated Sherlock Holmes. Castle Books.
Costa-Gomes, M. A., Ju, Y., & Li, J. (2019). Role reversal consistency: An experimental study of the golden rule. Economic Inquiry, 57(1), 685–704. https://doi.org/10.1111/ecin.12708
Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and passion for long-term goals. Journal of Personality and Social Psychology, 92(6), 1087–1101. https://doi.org/10.1037/0022-3514.92.6.1087
Eling, P., & Finger, S. (eds.) (2021). Gall, Spurzheim, and the Phrenological Movement: Insights and Perspectives. Routledge.
Falk, C. F., Heine, S. J., Takemura, K., Zhang, C. X., & Hsu, C.-W. (2015). Are implicit self-esteem measures valid for assessing individual and cultural differences? Journal of Personality, 83, 56–68. https://doi.org/10.1111/jopy.12082
Foster, S. L., & Cone, J. D. (1995). Validity issues in clinical assessment. Psychological Assessment, 7(3), 248–260. https://doi.org/10.1037/1040-3590.7.3.248
Fuentes, A., Ackermann, R. R., Athreya, S., Bolnick, D., Lasisi, T., Lee, S., et al. (2019). AAPA statement on race and racism. American Journal of Physical Anthropology, 169(3), 400–402. https://doi.org/10.1002/ajpa.23882
Gardner, M. (1957). From bumps to handwriting. In Gardner, Fads and Fallacies in the Name of Science. Dover.
Ghandnoosh, N. (2014, September 3). Race and punishment: Racial perceptions of crime and support for punitive policies. The Sentencing Project. www.sentencingproject.org/publications/race-and-punishment-racial-perceptions-of-crime-and-support-for-punitive-policies
Goodman, A. H., Moses, Y. T., & Jones, J. L. (2020). Race: Are We So Different? 2nd ed. John Wiley & Sons / American Anthropological Association.
Greenblatt, S. H. (1995). Phrenology in the science and culture of the 19th century. Neurosurgery, 37(4), 790–805. https://doi.org/10.1227/00006123-199510000-00025
Herrnstein, R. J., & Murray, C. (1994). The Bell Curve: Intelligence and Class Structure in American Life. Free Press.
Hollander, B. (1902). Scientific Phrenology: Being a Practical Mental Science and Guide to Human Character. Grant Richards.
Inzlicht, M., McKay, L., & Aronson, J. (2006). Stigma as ego depletion: How being the target of prejudice affects self-control. Psychological Science, 17(3), 262–269. https://doi.org/10.1111/j.1467-9280.2006.01695.x
Jones, N. (2020). Carbon dating, the archaeological workhorse, is getting a major reboot. Nature News, May 19. https://doi.org/10.1038/d41586-020-01499-y
Lilienfeld, S. O. (2017). Microaggressions: Strong claims, inadequate evidence. Perspectives on Psychological Science, 12(1), 138–169. https://doi.org/10.1177/1745691616659391
Lissitz, R. W. (ed.). (2009). The Concept of Validity: Revisions, New Directions, and Applications. IAP Information Age.
Milner, J. S. (1994). Assessing physical child abuse risk: The Child Abuse Potential Inventory. Clinical Psychology Review, 14(6), 547–583. https://doi.org/10.1016/0272-7358(94)90017-5
Milner, J. S. (2022a). The Child Abuse Potential Inventory: Manual. Psytec.
Milner, J. S. (2022b). An Interpretive Manual for the Child Abuse Potential Inventory, rev. ed. Psytec.
Milner, J. S., & Crouch, J. L. (2012). Psychometric characteristics of translated versions of the Child Abuse Potential Inventory. Psychology of Violence, 2(3), 239–259. https://doi.org/10.1037/a0026957
Milner, J. S., & Crouch, J. L. (2017). Child physical abuse risk assessment: Parent and family evaluations. In Campbell, J. C. & Messing, J. T. (eds.), Assessing Dangerousness: Domestic Violence Offenders and Child Abusers, 3rd ed. (pp. 55–88). Springer.
Parker Jones, O., Alfaro-Almagro, F., & Jbabdi, S. (2018). An empirical, 21st century evaluation of phrenology. Cortex, 106, 28–35. https://doi.org/10.1016/j.cortex.2018.04.011
Ponnock, A., Muenks, K., Morell, M., Seung Yang, J., Gladstone, J. R., & Wigfield, A. (2020). Grit and conscientiousness: Another jangle fallacy. Journal of Research in Personality, 89, 1–5. https://doi.org/10.1016/j.jrp.2020.104021
Rudman, L. A., Greenwald, A. G., Mellott, D. S., & Schwartz, J. L. K. (1999). Measuring the automatic components of prejudice: Flexibility and generality of the Implicit Association Test. Social Cognition, 17(4), 437–465. https://doi.org/10.1521/soco.1999.17.4.437
Sackett, P. R., Putka, D. J., & McCloy, R. A. (2012). The concept of validity and the process of validation. In Schmitt, N. (ed.), The Oxford Handbook of Personnel Assessment and Selection (pp. 91–118). Oxford University Press.
Schimmack, U. (2021). The Implicit Association Test: A method in search of a construct. Perspectives on Psychological Science, 16(2), 396–414. https://doi.org/10.1177/1745691619863798
Sue, D. W., Capodilupo, C. M., Torino, G. C., Bucceri, J. M., Holder, A. M. B., Nadal, K. L., & Esquilin, M. (2007). Racial microaggressions in everyday life: Implications for clinical practice. American Psychologist, 62(4), 271–286. https://doi.org/10.1037/0003-066X.62.4.271
Thomas, C., & Skowronski, J. J. (2020). The study of microaggressive behavior: Reflections on the construct, construct-relevant research, and possible future research. In Nadler, J. T. & Voyles, E. C. (eds.), Stereotypes: The Incidence and Impacts of Bias (pp. 45–69). Praeger.
Thompson, C. E. (2021). An Organ of Murder: Crime, Violence, and Phrenology in Nineteenth-Century America. Rutgers University Press.
Tynan, M. C. (2021). Deconstructing Grit's validity: The case for revising Grit measures and theory. In van Zyl, L. E., Olckers, C., & van der Vaart, E. (eds.), Multidisciplinary Perspectives on Grit: Contemporary Theories, Assessments, Applications and Critiques (pp. 137–155). Springer.
Vazire, S., Schiavone, S. R., & Bottesini, J. G. (2022). Credibility beyond replicability: Improving the four validities in psychological science. Current Directions in Psychological Science, 31(2), 162–168. https://doi.org/10.1177/09637214211067779
Vintage News Daily. (2017, September 20). Bad invention: Psychograph, a phrenology machine to measure the shape of your head from the early 20th century. https://vintagenewsdaily.com/bad-invention-psychograph-a-phrenology-machine-to-measure-the-shape-of-your-head-from-the-early-20th-century

References

Aerts, M., Molenberghs, G., & Thas, O. (2021). Graduate education in statistics and data science: The why, when, where, who, and what. Annual Review of Statistics and Its Application, 8(1), 25–39. https://doi.org/10.1146/annurev-statistics-040620-032820
Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample-size planning for more accurate statistical power: A method adjusting sample effect sizes for publication bias and uncertainty. Psychological Science, 28(11), 1547–1562. https://doi.org/10.1177/0956797617723724
Anvari, F., & Lakens, D. (2021). Using anchor-based methods to determine the smallest effect size of interest. Journal of Experimental Social Psychology, 96, 104159. https://doi.org/10.1016/j.jesp.2021.104159
Benjamin, D. J., Berger, J. O., Johannesson, M., Nosek, B. A., Wagenmakers, E.-J., Berk, R., et al. (2018). Redefine statistical significance. Nature Human Behaviour, 2(1), 6–10. https://doi.org/10.1038/s41562-017-0189-z
Buchanan, E. M., Valentine, K. D., & Maxwell, N. P. (2019). LAB: Linguistic Annotated Bibliography – a searchable portal for normed database information. Behavior Research Methods, 51(4), 1878–1888. https://doi.org/10.3758/s13428-018-1130-8
Champely, S., Ekstrom, C., Dalgaard, P., Gill, J., Weibelzahl, S., Anandkumar, A., et al. (2017). pwr: Basic functions for power analysis [software]. https://cran.r-project.org/web/packages/pwr
Cohen, J. (2013). Statistical Power Analysis for the Behavioral Sciences. Routledge. https://doi.org/10.4324/9780203771587
Coles, N. A., Hamlin, J. K., Sullivan, L. L., Parker, T. H., & Altschul, D. (2022). Build up big-team science. Nature, 601(7894), 505–507. https://doi.org/10.1038/d41586-022-00150-2
Cuccolo, K., Irgens, M. S., Zlokovich, M. S., Grahe, J., & Edlund, J. E. (2021). What crowdsourcing can offer to cross-cultural psychological science. Cross-Cultural Research, 55(1), 3–28. https://doi.org/10.1177/1069397120950628
DeBruine, L. (2021). faux: Simulation for factorial designs [software]. Zenodo. https://doi.org/10.5281/ZENODO.2669586
Dienes, Z. (2008). Understanding Psychology as a Science. Palgrave Macmillan.
Dienes, Z. (2023). Testing theories with Bayes factors. In Nichols, A. L. & Edlund, J. E. (eds.), Cambridge Handbook of Research Methods and Statistics for the Social and Behavioral Sciences (vol. 1, pp. 494–512). Cambridge University Press.
Dziak, J. J., Dierker, L. C., & Abar, B. (2020). The interpretation of statistical power after the data have been gathered. Current Psychology, 39(3), 870–877. https://doi.org/10.1007/s12144-018-0018-1
Erdfelder, E., Faul, F., & Buchner, A. (1996). GPOWER: A general power analysis program. Behavior Research Methods, Instruments, & Computers, 28(1), 1–11. https://doi.org/10.3758/BF03203630
Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146
Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502–1505. https://doi.org/10.1126/science.1255484
Gignac, G. E., & Szodorai, E. T. (2016). Effect size guidelines for individual differences researchers. Personality and Individual Differences, 102, 74–78. https://doi.org/10.1016/j.paid.2016.06.069
Goldfeld, K., & Wujciak-Jens, J. (2020). simstudy: Illuminating research methods through data generation. Journal of Open Source Software, 5(54), 2763. https://doi.org/10.21105/joss.02763
Gümüş, M. M., & Kukul, V. (2023). Developing a digital competence scale for teachers: Validity and reliability study. Education and Information Technologies, 28(3), 2747–2765. https://doi.org/10.1007/s10639-022-11213-2
Heckman, M. G., Davis, J. M., & Crowson, C. S. (2022). Post hoc power calculations: An inappropriate method for interpreting the findings of a research study. Journal of Rheumatology, 49(8), 867–870. https://doi.org/10.3899/jrheum.211115
JASP Team. (2022). JASP (version 0.16.3) [software]. https://jasp-stats.org
Kelley, K. (2007). Sample size planning for the coefficient of variation from the accuracy in parameter estimation approach. Behavior Research Methods, 39(4), 755–766. https://doi.org/10.3758/BF03192966
Kelley, K. (2022). MBESS: The MBESS R package [software]. https://CRAN.R-project.org/package=MBESS
Kelley, K., Darku, F. B., & Chattopadhyay, B. (2018). Accuracy in parameter estimation for a general class of effect sizes: A sequential approach. Psychological Methods, 23(2), 226–243. https://doi.org/10.1037/met0000127
Koch, C., & Jones, A. (2016). Big science, team science, and open science for neuroscience. Neuron, 92(3), 612–616. https://doi.org/10.1016/j.neuron.2016.10.019
Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 863. https://doi.org/10.3389/fpsyg.2013.00863
Lakens, D., Adolfi, F. G., Albers, C. J., Anvari, F., Apps, M. A. J., Argamon, S. E., et al. (2018). Justify your alpha. Nature Human Behaviour, 2(3), 168–171. https://doi.org/10.1038/s41562-018-0311-x
Legaki, N.-Z., Xi, N., Hamari, J., Karpouzis, K., & Assimakopoulos, V. (2020). The effect of challenge-based gamification on learning: An experiment in the context of statistics education. International Journal of Human-Computer Studies, 144, 102496. https://doi.org/10.1016/j.ijhcs.2020.102496
Matli, W., & Ngoepe, M. (2020). Capitalizing on digital literacy skills for capacity development of people who are not in education, employment or training in South Africa. African Journal of Science, Technology, Innovation and Development, 12(2), 129–139. https://doi.org/10.1080/20421338.2019.1624008
Maxwell, S. E., Kelley, K., & Rausch, J. R. (2008). Sample size planning for statistical power and accuracy in parameter estimation. Annual Review of Psychology, 59, 537–563. https://doi.org/10.1146/annurev.psych.59.103006.093735
McElreath, R. (2020). Statistical Rethinking: A Bayesian Course with Examples in R and STAN, 2nd ed. Routledge.
Meehl, P. E. (1967). Theory-testing in psychology and physics: A methodological paradox. Philosophy of Science, 34(2), 103–115. https://doi.org/10.1086/288135
Miller, J., & Ulrich, R. (2019). The quest for an optimal alpha. PLOS ONE, 14(1), e0208631. https://doi.org/10.1371/journal.pone.0208631
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716
Pangrazio, L., Godhe, A.-L., & Ledesma, A. G. L. (2020). What is digital literacy? A comparative review of publications across three language contexts. E-Learning and Digital Media, 17(6), 442–459. https://doi.org/10.1177/2042753020946291
Pek, J., Pitt, M. A., & Wegener, D. T. (in press). Uncertainty limits the use of power analysis. Journal of Experimental Psychology: General.
Porras-Segovia, A., Molina-Madueño, R. M., Berrouiguet, S., López-Castroman, J., Barrigón, M. L., Pérez-Rodríguez, M. S., et al. (2020). Smartphone-based ecological momentary assessment (EMA) in psychiatric patients and student controls: A real-world feasibility study. Journal of Affective Disorders, 274, 733–741. https://doi.org/10.1016/j.jad.2020.05.067
R Core Team. (2022). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. www.gbif.org/tool/81287/r-a-language-and-environment-for-statistical-computing
Ralston, K. (2020). "Sociologists shouldn't have to study statistics": Epistemology and anxiety of statistics in sociology students. Sociological Research Online, 25(2), 219–235. https://doi.org/10.1177/1360780419888927
Riesthuis, P., Mangiulli, I., Broers, N., & Otgaar, H. (2022). Expert opinions on the smallest effect size of interest in false memory research. Applied Cognitive Psychology, 36(1), 203–215. https://doi.org/10.1002/acp.3911
Seaborn, K., & Fels, D. I. (2015). Gamification in theory and action: A survey. International Journal of Human-Computer Studies, 74, 14–31. https://doi.org/10.1016/j.ijhcs.2014.09.006
Shiffman, S., Stone, A. A., & Hufford, M. R. (2008). Ecological momentary assessment. Annual Review of Clinical Psychology, 4(1), 1–32. https://doi.org/10.1146/annurev.clinpsy.3.022806.091415
Silberzahn, R., Uhlmann, E. L., Martin, D. P., Anselmi, P., Aust, F., Awtrey, E., et al. (2018). Many analysts, one data set: Making transparent how variations in analytic choices affect results. Advances in Methods and Practices in Psychological Science, 1(3), 337–356.
US Bureau of Labor Statistics. (n.d.). Occupational Outlook Handbook. www.bls.gov/ooh/math/data-scientists.htm
Vazire, S. (2018). Implications of the credibility revolution for productivity, creativity, and progress. Perspectives on Psychological Science, 13(4), 411–417. https://doi.org/10.1177/1745691617751884
Vevea, J. L., & Woods, C. M. (2005). Publication bias in research synthesis: Sensitivity analysis using a priori weight functions. Psychological Methods, 10, 428–443. https://doi.org/10.1037/1082-989X.10.4.428

References

Bergmann, M., Jahn, T., Knobloch, T., Krohn, W., Pohl, C., & Schramm, E. (2012). Methods for Transdisciplinary Research: A Primer for Practice. Campus.
British Academy. (2021). Knowledge Exchange in the SHAPE Disciplines. www.thebritishacademy.ac.uk/publications/knowledge-exchange-in-the-shape-disciplines
Bromme, R. (2000). Beyond one's own perspective: The psychology of cognitive interdisciplinarity. In Weingart, P. and Stehr, N. (eds.), Practicing Interdisciplinarity (pp. 115–133). University of Toronto Press.
Buzan, T. (2010). The Mind Map Book. BBC Books.
Fam, D., & O'Rourke, M. (eds.). (2021). Interdisciplinary and Transdisciplinary Failures: Lessons Learned from Cautionary Tales. Routledge.
Frickel, S., et al. (eds.). (2016). Investigating Interdisciplinary Collaboration: Theory and Practice across Disciplines. Rutgers University Press.
Galison, P. (1997). Image and Logic: A Material Culture of Microphysics. University of Chicago Press.
Hall, K. L., Vogel, A. L., & Croyle, R. T. (eds.). (2019). Strategies for Team Science Success: Handbook of Evidence-Based Principles for Cross-Disciplinary Science and Practical Lessons Learned from Health Researchers. Springer.
Hesse-Biber, S., & Johnson, R. B. (eds.). (2015). Oxford Handbook of Multimethod and Mixed Methods Research Inquiry. Oxford University Press.
Hirsch Hadorn, G., Hoffmann-Riem, H., Biber-Klemm, S., Grossenbacher-Mansuy, W., Joye, D., Pohl, C., et al. (eds.). (2008). Handbook of Transdisciplinary Research. Springer.
Holley, K., & Szostak, R. (2022). Interdisciplinary education and research in North America. In Klein, J. T. and Baptista, B. V. (eds.), Interdisciplinarity and Transdisciplinarity: Institutionalizing Collaboration across Cultures and Communities. Routledge.
Jacobs, J. A. (2013). In Defense of Disciplines: Interdisciplinarity and Specialization in the Research University. University of Chicago Press.
Laursen, B. K., Motzer, N., & Anderson, K. J. (2022). Pathway profiles: Learning from five main approaches to assessing interdisciplinarity. Research Evaluation, 32(2), 213–227. https://doi.org/10.1093/reseval/rvac036
Lumsden, K. (2019). Introduction: The reflexive turn and the social sciences. In Lumsden, K., Bradford, J., & Goode, J. (eds.), Reflexivity: Theory, Method, and Practice (pp. 1–36). Routledge.
Lyall, C. (2019). Being an Interdisciplinary Academic: How Institutions Shape Careers. Palgrave Macmillan.
Lyall, C., Bruce, A., Tait, J., & Meagher, L. (2011). Interdisciplinary Research Journeys. Bloomsbury Academic.
McLeish, T., and Strang, V. (2016). Evaluating interdisciplinary research: The elephant in the peer-reviewers' room. Palgrave Communications, 2, 16055.
Newell, W. H. (2001). A theory of interdisciplinary studies. Issues in Integrative Studies, 19, 1–25.
Okamura, K. (2019). Interdisciplinarity revisited: Evidence for research impact and dynamism. Palgrave Communications, 5, 141.
O'Rourke, M., Crowley, S., Eigenbrode, S. D., & Wulfhorst, J. D. (eds.). (2014). Enhancing Communication and Collaboration in Interdisciplinary Research. SAGE Publications.
Repko, A. F., Newell, W. H., & Szostak, R. (eds.). (2012). Case Studies in Interdisciplinary Research. SAGE Publications.
Repko, A., & Szostak, R. (2020). Interdisciplinary Research: Process and Theory, 4th ed. SAGE Publications.
Repko, A., Szostak, R., & Buchberger, R. (2020). Introduction to Interdisciplinary Studies. SAGE Publications.
Sternberg, R. J. (2006). The nature of creativity. Creativity Research Journal, 18(1), 87–98.
Szostak, R. (2004). Classifying Science: Phenomena, Data, Theory, Method, Practice. Springer.
Szostak, R. (2012). The interdisciplinary research process. In Repko, A. F., Newell, W. H., & Szostak, R. (eds.), Case Studies in Interdisciplinary Research (pp. 3–19). SAGE Publications.
Szostak, R. (2017a). Interdisciplinary research as a creative design process. In Darbellay, F., Moody, Z., & Lubart, T. (eds.), Creative Design Thinking from an Interdisciplinary Perspective (pp. 17–33). Springer.
Szostak, R. (2017b). Stability, instability, and interdisciplinarity. Issues in Interdisciplinary Studies, 35, 65–87.
Szostak, R. (2022). Integrating the Human Sciences: Enhancing Cohesion and Progress across the Social Sciences and Humanities. Routledge.
Szostak, R., Gnoli, C., & López-Huertas, M. (2016). Interdisciplinary Knowledge Organization. Springer.
Ulibarri, N., Cravens, A. E., Nabergoj, A. S., Kernbach, S., & Royalty, A. (2019). Creativity in Research: Cultivate Clarity, Be Innovative, and Make Progress in Your Research Journey. Cambridge University Press.

References

Anderson, S. F. (2020). Misinterpreting p: The discrepancy between p values and the probability the null hypothesis is true, the influence of multiple testing, and implications for the replication crisis. Psychological Methods, 25(5), 596–609. https://doi.org/10.1037/met0000248
Anderson, S. F., & Maxwell, S. E. (2016). There's more than one way to conduct a replication study: Beyond statistical significance. Psychological Methods, 21(1), 1–12. https://doi.org/10.1037/met0000051
Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature News, 533(7604), 452–454. https://philpapers.org/rec/BAKSL-2
Bakker, M., Van Dijk, A., & Wicherts, J. M. (2012). The rules of the game called psychological science. Perspectives on Psychological Science, 7(6), 543–554. https://doi.org/10.1177/1745691612459060
Begley, C. G., & Ellis, L. M. (2012). Raise standards for preclinical cancer research. Nature, 483(7391), 531–533. https://doi.org/10.1038/483531a
BITSS (Berkeley Initiative for Transparency in the Social Sciences). (2019, October 12). Resource Library. www.bitss.org/resource-library (retrieved December 16, 2022).
Bohannon, J. (2014). Replication effort provokes praise – and "bullying" charges. Science, 344(6186), 788–789.
Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., et al. (2014). The replication recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 217–224. https://doi.org/10.1016/j.jesp.2013.10.005
Crocker, J., & Cooper, M. L. (2011). Addressing scientific fraud. Science, 334(6060), 1182. https://doi.org/10.1126/science.1216775
Cuccolo, K., Irgens, M. S., Zlokovich, M. S., Grahe, J., & Edlund, J. E. (2021). What crowdsourcing can offer to cross-cultural psychological science. Cross-Cultural Research, 55(1), 3–28. https://doi.org/10.1177/1069397120950628
Cumming, G. (2013). Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. Routledge.
Dafoe, A. (2014). Science deserves better: The imperative to share complete replication files. PS: Political Science & Politics, 47(1), 60–66. https://doi.org/10.1017/S104909651300173X
Easley, R. W., Madden, C. S., & Gray, V. (2013). A tale of two cultures: Revisiting journal editors' views of replication research. Journal of Business Research, 66(9), 1457–1459. https://doi.org/10.1016/j.jbusres.2012.05.013
Ebersole, C. R., Atherton, O. E., Belanger, A. L., Skulborstad, H. M., Allen, J. M., Banks, J. B., & Nosek, B. A. (2016). Many Labs 3: Evaluating participant pool quality across the academic semester via replication. Journal of Experimental Social Psychology, 67, 68–82. https://doi.org/10.1016/j.jesp.2015.10.012
Ebersole, C. R., Mathur, M. B., Baranski, E., Bart-Plange, D. J., Buttrick, N. R., Chartier, C. R., et al. (2020). Many Labs 5: Testing pre-data-collection peer review as an intervention to increase replicability. Advances in Methods and Practices in Psychological Science, 3(3), 309–331.
Edlund, J. E., Cuccolo, K., Irgens, M. S., Wagge, J. R., & Zlokovich, M. S. (2022). Science through replication studies. Perspectives on Psychological Science, 17(1), 216–225. https://doi.org/10.1177/1745691620984385
Eitzel, M. V., Cappadonna, J. L., Santos-Lang, C., Duerr, R. E., Virapongse, A., West, S. E., et al. (2017). Citizen science terminology matters: Exploring key terms. Citizen Science: Theory and Practice, 2(1), 2. http://doi.org/10.5334/cstp.96
Ferguson, C. J., & Heene, M. (2012). A vast graveyard of undead theories: Publication bias and psychological science's aversion to the null. Perspectives on Psychological Science, 7(6), 555–561. https://doi.org/10.1177/1745691612459059
Field, S. M., Hoekstra, R., Bringmann, L., & van Ravenzwaaij, D. (2019). When and why to replicate: As easy as 1, 2, 3? Collabra: Psychology, 5(1), 46. https://doi.org/10.1525/collabra.218
Forstmeier, W., Wagenmakers, E.-J., & Parker, T. H. (2017). Detecting and avoiding likely false-positive findings – a practical guide. Biological Reviews, 92(4), 1941–1968. https://doi.org/10.1111/brv.12315
Gernsbacher, M. A. (2018). Writing empirical articles: Transparency, reproducibility, clarity, and memorability. Advances in Methods and Practices in Psychological Science, 1(3), 403–414. https://doi.org/10.1177/2515245918754485
Giner-Sorolla, R. (2012). Science or art? How aesthetic standards grease the way through the publication bottleneck but undermine science. Perspectives on Psychological Science, 7(6), 562–571. https://doi.org/10.1177/1745691612457576
Grahe, J. E., & Cuccolo, K. (2023, May 19). Replications of Willow in color. https://osf.io/deu8y
Grahe, J. E., Williams, K. D., & Hinsz, V. B. (2000). Teaching experimental methods while bringing smiles to your students' faces. Teaching of Psychology, 27(2), 108–111. https://doi.org/10.1207/S15328023TOP2702_06
Hardwicke, T. E., Tessler, M. H., Peloquin, B. N., & Frank, M. C. (2018). A Bayesian decision-making framework for replication. Behavioral and Brain Sciences, 41, e132.
Hedges, L. V. (2019). The statistics of replication. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 15(S1), 3–14. https://doi.org/10.1027/1614-2241/a000173
Hedges, L. V., & Schauer, J. M. (2019). Statistical analyses for studying replication: Meta-analytic perspectives. Psychological Methods, 24(5), 557–570. https://doi.org/10.1037/met0000189
Hendriks, F., Kienhues, D., & Bromme, R. (2020). Replication crisis = trust crisis? The effect of successful vs failed replications on laypeople's trust in researchers and research. Public Understanding of Science, 29(3), 270–288. https://doi.org/10.1177/0963662520902383
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). Most people are not WEIRD. Nature, 466(7302), 29. https://doi.org/10.1038/466029a
Hesse, B. W. (2018). Can psychology walk the walk of open science? American Psychologist, 73(2), 126. https://doi.org/10.1037/amp0000197
Hinsz, V. B., & Tomhave, J. A. (1991). Smile and (half) the world smiles with you, frown and you frown alone. Personality and Social Psychology Bulletin, 17(5), 586–592. https://doi.org/10.1177/0146167291175014
Honey-Rosés, J., Anguelovski, I., Chireh, V. K., Daher, C., Konijnendijk van den Bosch, C., Litt, J. S., et al. (2021). The impact of COVID-19 on public space: An early review of the emerging questions – design, perceptions and inequities. Cities & Health, 5(S1), S263–S279. https://doi.org/10.1080/23748834.2020.1780074
Ioannidis, J. P. (2005). Why most published research findings are false. PLOS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124
Ioannidis, J. P. A., Kim, B. Y. S., & Trounson, A. (2018). How to design preclinical studies in nanomedicine and cell therapy to maximize the prospects of clinical translation. Nature Biomedical Engineering, 2, 797–809. https://doi.org/10.1038/s41551-018-0314-y
Islam, M. R. (2018). Sample size and its role in Central Limit Theorem (CLT). International Journal of Physics and Mathematics, 1(1), 37–47. https://doi.org/10.31295/ijpm.v1n1.42
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524–532. https://doi.org/10.1177/0956797611430953
Kass-Hout, T. A., Xu, Z., Mohebbi, M., Nelsen, H., Baker, A., Levine, J., et al. (2016). OpenFDA: An innovative platform providing access to a wealth of FDA's publicly available data. Journal of the American Medical Informatics Association, 23(3), 596–600. https://doi.org/10.1093/jamia/ocv153
Kelly, A. E., & Cuccolo, K. (2022). Supporting college students during times of transition: Pedagogical recommendations based on pandemic learning data. College Teaching, 72(1), 15–27. https://doi.org/10.1080/87567555.2022.2071825
Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196–217. https://doi.org/10.1207/s15327957pspr0203_4
Kidwell, M. C., Lazarević, L. B., Baranski, E., Hardwicke, T. E., Piechowski, S., Falkenberg, L. S., et al. (2016). Badges to acknowledge open practices: A simple, low-cost, effective method for increasing transparency. PLOS Biology, 14(5), e1002456. https://doi.org/10.1371/journal.pbio.1002456
Klein, R. A., Cook, C. L., Ebersole, C. R., Vitiello, C., Nosek, B. A., Hilgard, J., et al. (2022). Many Labs 4: Failure to replicate mortality salience effect with and without original author involvement. Collabra: Psychology, 8(1), 35271. https://doi.org/10.1525/collabra.35271
Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B. Jr, Bahník, Š., Bernstein, M. J., et al. (2014). Investigating variation in replicability: A "many labs" replication project. Social Psychology, 45(3), 142–152. https://doi.org/10.1027/1864-9335/a000178
Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B. Jr, Alper, S., et al. (2018). Many Labs 2: Investigating variation in replicability across samples and settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490. https://doi.org/10.1177/2515245918810225
Kraus, M. W. (2013). Look everyone: A social priming finding with direct replications! Psych Your Mind [blog], November 6. https://psych-your-mind.blogspot.com/2013/11/look-everyone-social-priming-finding.html
Kraus, M. W. (2015). Americans still overestimate social class mobility: A pre-registered self-replication. Frontiers in Psychology, 6, 1709. https://doi.org/10.3389/fpsyg.2015.01709
Kraus, M. W., & Tan, J. J. (2015). Americans overestimate social class mobility. Journal of Experimental Social Psychology, 58, 101–111. https://doi.org/10.1016/j.jesp.2015.01.005
Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 863. https://doi.org/10.3389/fpsyg.2013.00863
Lakens, D., Scheel, A. M., & Isager, P. M. (2018). Equivalence testing for psychological research: A tutorial. Advances in Methods and Practices in Psychological Science, 1(2), 259–269. https://doi.org/10.1177/2515245918770963
Lehrer, J. (2011). Trials and errors: Why science is failing us. Wired, December 16. www.wired.com/2011/12/ff-causation
Loken, E., & Gelman, A. (2017). Measurement error and the replication crisis. Science, 355(6325), 584–585. www.science.org/doi/full/10.1126/science.aal3618
Moshontz, H., Campbell, L., Ebersole, C. R., IJzerman, H., Urry, H. L., Forscher, P. S., et al. (2018). The Psychological Science Accelerator: Advancing psychology through a distributed collaborative network. Advances in Methods and Practices in Psychological Science, 1(4), 501–515. https://doi.org/10.1177/2515245918797607
Napoli, P. M., & Karaganis, J. (2010). On making public policy with publicly available data: The case of US communications policymaking. Government Information Quarterly, 27(4), 384–391. https://doi.org/10.1016/j.giq.2010.06.005
Neuliep, J. W. (1990). Editorial bias against replication research. Journal of Social Behavior and Personality, 5(4), 85–90.
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., et al. (2015). Scientific standards: Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/10.1126/science.aab2374
Nosek, B. A., & Bar-Anan, Y. (2012). Scientific Utopia: I. Opening scientific communication. Psychological Inquiry, 23(3), 217–243. https://doi.org/10.1080/1047840X.2012.692215
Nosek, B. A., & Errington, T. M. (2020). What is replication? PLOS Biology, 18(3), e3000691. https://doi.org/10.1371/journal.pbio.3000691
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific Utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7(6), 615–631.
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.
Pashler, H., & Wagenmakers, E.-J. (2012). Editors' introduction to the special section on replicability in psychological science: A crisis of confidence? Perspectives on Psychological Science, 7(6), 528–530. https://doi.org/10.1177/1745691612465253
Patterson, M. L. (2008). Back to social behavior: Mining the mundane. Basic and Applied Social Psychology, 30(2), 93–101. https://doi.org/10.1080/01973530802208816
Patterson, M. L., & Tubbs, M. E. (2005). Through a glass darkly: Effects of smiling and visibility on recognition and avoidance in passing encounters. Western Journal of Communication, 69(3), 219–231. https://doi.org/10.1080/10570310500202389
Popper, K. R. (1963). Science as falsification. Conjectures and Refutations, 1, 33–39.
Prinz, F., Schlange, T., & Asadullah, K. (2011). Believe it or not: How much can we rely on published data on potential drug targets? Nature Reviews Drug Discovery, 10, 712. https://doi.org/10.1038/nrd3439-c1
Rahman, M. K., Gazi, M. A. I., Bhuiyan, M. A., & Rahaman, M. A. (2021). Effect of Covid-19 pandemic on tourist travel risk and management perceptions. PLOS ONE, 16(9), e0256486. https://doi.org/10.1371/journal.pone.0256486
Rohlfing, I., & Zuber, C. I. (2021). Check your truth conditions! Clarifying the relationship between theories of causation and social science methods for causal inference. Sociological Methods & Research, 50(4), 1623–1659. https://doi.org/10.1177/0049124119826156
Rosenthal, R. (1979). The File Drawer Problem and tolerance for null results. Psychological Bulletin, 86(3), 638–641. https://doi.org/10.1037/0033-2909.86.3.638
Rouse, S. V. (2017). The red badge of research (and the yellow, blue, and green badges, too). Psi Chi Journal of Psychological Research, 22(1), 2–9.
Savage, C. J., & Vickers, A. J. (2009). Empirical study of data sharing by authors publishing in PLOS journals. PLOS ONE, 4(9), e7078. https://doi.org/10.1371/journal.pone.0007078
Schmidt, S. (2009). Shall we really do it again? The powerful concept of replication is neglected in the social sciences. Review of General Psychology, 13(2), 90–100. https://doi.org/10.1037/a0015108
Schooler, J. (2011). Unpublished results hide the decline effect. Nature, 470(7335), 437–438. https://doi.org/10.1038/470437a
School Spirit Study Group, Sandra, A. H., Rowatt, T., Brooks, L., Magid, V., Stage, R., et al. (2004). Measuring school spirit: A national teaching exercise: The School Spirit Study Group. Teaching of Psychology, 31(1), 18–21. https://doi.org/10.1207/s15328023top3101_5
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. https://doi.org/10.1177/0956797611417632
Simons, D. J. (2014). The value of direct replication. Perspectives on Psychological Science, 9(1), 76–80. https://doi.org/10.1177/1745691613514755
Simons, D. J., Holcombe, A. O., & Spellman, B. A. (2014). An introduction to registered replication reports at Perspectives on Psychological Science. Perspectives on Psychological Science, 9(5), 552–555. https://doi.org/10.1177/1745691614543974
Smith, R. J. (2018). The continuing misuse of null hypothesis significance testing in biological anthropology. American Journal of Physical Anthropology, 166(1), 236–245. https://doi.org/10.1002/ajpa.23399
Spellman, B. A. (2012). Introduction to the special section on research practices. Perspectives on Psychological Science, 7(6), 655–656. https://doi.org/10.1177/1745691612465075
Świątkowski, W., & Dompnier, B. (2017). Replicability crisis in social psychology: Looking at the past to find new pathways for the future. International Review of Social Psychology, 30(1), 111–124. https://doi.org/10.5334/irsp.66
Uhlmann, E. L., Ebersole, C. R., Chartier, C. R., Errington, T. M., Kidwell, M. C., Lai, C. K., et al. (2019). Scientific Utopia: III. Crowdsourcing science. Perspectives on Psychological Science, 14(5), 711–733. https://doi.org/10.1177/1745691619850561
Vindrola-Padros, C., Andrews, L., Dowrick, A., Djellouli, N., Fillmore, H., Gonzalez, E. B., et al. (2020). Perceptions and experiences of healthcare workers during the COVID-19 pandemic in the UK. BMJ Open, 10(11), e040503. http://dx.doi.org/10.1136/bmjopen-2020-040503
Wagenmakers, E.-J., Beek, T., Dijkhoff, L., Gronau, Q. F., Acosta, A., Adams, R. B. Jr, et al. (2016). Registered replication report: Strack, Martin, & Stepper (1988). Perspectives on Psychological Science, 11(6), 917–928. https://doi.org/10.1177/1745691616674458
Wagge, J. R., Brandt, M. J., Lazarevic, L. B., Legate, N., Christopherson, C., Wiggins, B., & Grahe, J. E. (2019). Publishing research with undergraduate students via replication work: The collaborative replications and education project. Frontiers in Psychology, 10, 247. https://doi.org/10.3389/fpsyg.2019.00247
Wallach, J. D., Boyack, K. W., & Ioannidis, J. P. (2018). Reproducible research practices, transparency, and open access data in the biomedical literature, 2015–2017. PLOS Biology, 16(11), e2006930. https://doi.org/10.1371/journal.pbio.2006930
Wicherts, J. M., Veldkamp, C. L., Augusteijn, H. E., Bakker, M., Van Aert, R., & Van Assen, M. A. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7, 1832. https://doi.org/10.3389/fpsyg.2016.01832
Wiggins, B. J., & Christopherson, C. D. (2019). The replication crisis in psychology: An overview for theoretical and philosophical psychology. Journal of Theoretical and Philosophical Psychology, 39(4), 202–217. https://doi.org/10.1037/teo0000137
Wingen, T., Berkessel, J. B., & Englich, B. (2020). No replication, no trust? How low replicability influences trust in psychology. Social Psychological and Personality Science, 11(4), 454–463. https://doi.org/10.1177/1948550619877412
Zajonc, R. B., Murphy, S. T., & Inglehart, M. (1989). Feeling and facial efference: Implications of the vascular theory of emotion. Psychological Review, 96(3), 395–416. https://doi.org/10.1037/0033-295X.96.3.395
Zhang, X., Astivia, O. L. O., Kroc, E., & Zumbo, B. D. (2022). How to think clearly about the Central Limit Theorem. Psychological Methods, 28(6), 1427–1445. https://doi.org/10.1037/met0000448
Zwaan, R. A., Etz, A., Lucas, R. E., & Donnellan, M. B. (2018). Making replication mainstream. Behavioral and Brain Sciences, 41, e120. https://doi.org/10.1017/S0140525X17001972

References

Alger, B. E. (2019). Defense of the Scientific Hypothesis: From Reproducibility Crisis to Big Data. Oxford University Press.CrossRefGoogle Scholar
Allen, C., & Mehler, D. M. A. (2019). Open science challenges, benefits and tips in early career and beyond. PLOS Biology, 17(5), e3000246.CrossRefGoogle ScholarPubMed
Alogna, V. K., Attaya, M. K., Aucoin, P., Bahnik, S., Birch, S., Birt, A. R., et al. (2014). Registered replication report: Schooler & Engstler-Schooler (1990). Perspectives on Psychological Science, 9, 556578.CrossRefGoogle ScholarPubMed
Baribault, B., Donkin, C., Little, D. R., Trueblood, J., Oravecz, Z., van Ravenzwaaij, D., et al. (2018). Metastudies for robust tests of theory. Proceedings of the National Academy of Sciences, 115, 26072612.CrossRefGoogle ScholarPubMed
Baumeister, R. F. (2016). Charting the future of social psychology on stormy seas: Winners, losers, and recommendations. Journal of Experimental Social Psychology, 66, 153158CrossRefGoogle Scholar
Benjamin, D. J., Berger, J. O., Johannesson, M., Nosek, B. A., Wagenmakers, R. B., Bollen, K. A., et al. (2017). Redefine statistical signfiicance. Nature Human Behaviour, 2, 610. https://doi.org/10.1038/s41562-017-0189-zCrossRefGoogle Scholar
Braithwaite, V. (2010). Do Fish Feel Pain? Oxford University Press.Google Scholar
Carney, D. R., Cuddy, A. J. C., & Yap, A. J. (2010). Power posing: Brief nonverbal displays affect neuroendocrine levels and risk tolerance. Psychological Science, 21, 13631368.CrossRefGoogle ScholarPubMed
Carney, D. R., Cuddy, A. J. C., & Yap, A. J. (2015). Review and summary of research on the embodied effects of expansive (vs. contractive) nonverbal displays. Psychological Science, 26, 657663.CrossRefGoogle ScholarPubMed
Carp, J. (2012). On the plurality of (methodological) worlds: Estimating the analytic flexibility of fMRI experiments. Frontiers in Neuroscience, 6. http://doi.org/10.3389/fnins.2012.00149CrossRefGoogle ScholarPubMed
Carroll, C., Patterson, M., Wood, S., Booth, A., Rick, J., & Balain, S. (2007). A conceptual framework for implementation fidelity. Implementation Science, 2, 40. https://doi.org/10.1186/1748-5908-2-40
Center for Open Science. (2023). Registered Reports: Peer review before results are known to align scientific values and practices [webpage]. www.cos.io/initiatives/registered-reports (viewed March 16, 2023).
Cesario, J., Jonas, K. J., & Carney, D. R. (2017). CRSP special issue on power poses: What was the point and what did we learn? Comprehensive Results in Social Psychology, 2(1), 1–5. https://doi.org/10.1080/23743603.2017.1309876
Chambers, C. D. (2013). Registered reports: A new publishing initiative at Cortex. Cortex, 49, 609–610.
Chambers, C. D. (2019). What’s next for registered reports? Nature, 573(7773), 187–189.
Chambers, C. D., & Tzavella, L. (2020). The past, present and future of Registered Reports. MetaArXiv Preprints. https://doi.org/10.31222/osf.io/43298
Chavalarias, D., & Ioannidis, J. P. A. (2010). Science mapping analysis characterizes 235 biases in biomedical research. Journal of Clinical Epidemiology, 63, 1205–1215.
Chuard, P. J. C., Vrtílek, M., Head, M. L., & Jennions, M. D. (2019). Evidence that nonsignificant results are sometimes preferred: Reverse P-hacking or selective reporting? PLOS Biology, 17(1), e3000127. https://doi.org/10.1371/journal.pbio.3000127
Deci, E. L., & Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11, 227–268.
Deutsch, D. (2011). The Beginning of Infinity: Explanations that Transform the World. Penguin.
Devezer, B., Navarro, D. J., Vandekerckhove, J., & Buzbas, E. O. (2021). The case for formal methodology in scientific reform. Royal Society Open Science, 8, 200805. https://doi.org/10.1098/rsos.200805
Dienes, Z. (2016). How Bayes factors change scientific practice. Journal of Mathematical Psychology, 72, 78–89.
Dienes, Z. (2019). How do I know what my theory predicts? Advances in Methods and Practices in Psychological Science, 2, 364–377. https://doi.org/10.1177/2515245919876960
Dienes, Z. (2021a). Obtaining evidence for no effect. Collabra: Psychology, 7(1), 28202. https://doi.org/10.1525/collabra.28202
Dienes, Z. (2021b). How to use and report Bayesian hypothesis tests. Psychology of Consciousness: Theory, Research, and Practice, 8, 9–26.
Dienes, Z. (2023). Testing theories with Bayes factors. In Nichols, A. L. & Edlund, J. E. (eds.), Cambridge Handbook of Research Methods and Statistics for the Social and Behavioral Sciences (vol. 1, pp. 494–512). Cambridge University Press.
Dienes, Z., Palfi, B., & Lush, P. (2022). Controlling phenomenology by being unaware of intentions. In Weisberg, J. (ed.), Qualitative Consciousness: Themes from the Philosophy of David Rosenthal (pp. 229–242). Cambridge University Press.
Dutilh, G., Vandekerckhove, J., Ly, A., Matzke, D., Pedroni, A., Frey, R., et al. (2017). A test of the diffusion model explanation for the worst performance rule using preregistration and blinding. Attention, Perception, & Psychophysics, 79, 713–725. https://doi.org/10.3758/s13414-017-1304-y
Fiedler, K. (2017). What constitutes strong psychological science? The (neglected) role of diagnosticity and a priori theorizing. Perspectives on Psychological Science, 12, 46–61.
Flake, J. K., Davidson, I. J., Wong, O., & Pek, J. (2022). Construct validity and the validity of replication studies: A systematic review. American Psychologist, 77(4), 576–588.
Flake, J. K., & Fried, E. I. (2019). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3(4), 456–465. https://doi.org/10.1177/2515245920952393
Francis, G. (2012). Too good to be true: Publication bias in two prominent studies from experimental psychology. Psychonomic Bulletin & Review, 19, 151–156.
Friese, M., & Frankenbach, J. (2020). p-Hacking and publication bias interact to distort meta-analytic effect size estimates. Psychological Methods, 25(4), 456–471.
Gelman, A., & Loken, E. (2014). The statistical crisis in science: Data-dependent analysis – a “garden of forking paths” – explains why many statistically significant comparisons don’t hold up. American Scientist, 102(6), 460. https://doi.org/10.1511/2014.111.460
Goldacre, B., DeVito, N. J., Heneghan, C., Irving, F., Bacon, S., Fleminger, J., & Curtis, H. (2018). Compliance with requirement to report results on the EU Clinical Trials Register: Cohort study and web resource. BMJ, 362, k3218.
Greenwald, A. G. (1975). Consequences of prejudice against the null hypothesis. Psychological Bulletin, 82(1), 1–20. https://doi.org/10.1037/h0076157
Gronau, Q. F., van Erp, S., Heck, D. A., Cesario, J., Jonas, K. J., & Wagenmakers, E.-J. (2017). Bayesian model-averaged meta-analysis of the power pose effect with informed and default priors: The case of felt power. Comprehensive Results in Social Psychology, 2, 123–138.
Guest, O., & Martin, A. E. (2021). How computational modeling can force theory building in psychological science. Perspectives on Psychological Science, 16(4), 789–802. https://doi.org/10.1177/1745691620970585
Haig, B. D. (2014). Investigating the Psychological World: Scientific Method in the Behavioural Sciences. MIT Press.
Ikeda, A., Xu, H., Fuji, N., Zhu, S., & Yamada, Y. (2019). Questionable research practices following pre-registration. Japanese Psychological Review, 62(3), 281–295.
Jeffreys, H. (1939). The Theory of Probability. Oxford University Press.
Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality & Social Psychology Review, 2, 196–217.
Khan, K. A., & Cheung, P. (2020). Presence of mismatches between diagnostic PCR assays and coronavirus SARS-CoV-2 genome. Royal Society Open Science, 7, 200636. http://doi.org/10.1098/rsos.200636
Klein, S. B. (2014). What can recent replication failures tell us about the theoretical commitments of psychology? Theory & Psychology, 24(3), 326–338. https://doi.org/10.1177/0959354314529616
Kruschke, J. K. (2018). Rejecting or accepting parameter values in Bayesian estimation. Advances in Methods and Practices in Psychological Science, 1, 270–280.
Lakatos, I. (1970). Falsification and the methodology of scientific research programmes. In Lakatos, I. & Musgrave, A. (eds.), Criticism and the Growth of Knowledge (pp. 91–196). Cambridge University Press.
MacCoun, R., & Perlmutter, S. (2015). Hide results to seek the truth. Nature, 526, 187–189.
Mayo, D. G. (2018). Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars. Cambridge University Press.
McIntosh, R. D. (2017). Exploratory reports: A new article type for Cortex. Cortex, 96, A1–A4.
Nature Human Behaviour. (2020). Registered Reports [webpage]. www.nature.com/nathumbehav/submission-guidelines/registeredreports (retrieved April 20, 2020).
Newman, J., & Taylor, A. (1992). Effect of a means-end contingency on young children’s food preferences. Journal of Experimental Child Psychology, 64, 200–216.
Obels, P., Lakens, D., Coles, N. A., Gottfried, J., & Green, S. A. (2020). Analysis of open data and computational reproducibility in Registered Reports in psychology. Advances in Methods and Practices in Psychological Science, 3(2), 229–237. https://doi.org/10.1177/2515245920918872
Palfi, B., & Dienes, Z. (2019a). When and how to calculate the Bayes factor with an interval null hypothesis. PsyArXiv Preprints. https://doi.org/10.31234/osf.io/9chmw
Palfi, B., & Dienes, Z. (2019b). The role of Bayes factors in testing interactions. PsyArXiv Preprints. https://doi.org/10.31234/osf.io/qjrg4
Palfi, B., & Dienes, Z. (2020). Why Bayesian “evidence for H1” in one condition and “evidence for H0” in another does not mean Bayesian evidence for a difference between conditions. Advances in Methods and Practices in Psychological Science, 3, 300–308.
PCI RR (Peer Community In Registered Reports). (2023). Guide for Authors [webpage]. https://rr.peercommunityin.org/help/guide_for_authors
Popper, K. R. (1959). The Logic of Scientific Discovery. Hutchinson.
Popper, K. R. (1963). Conjectures and Refutations: The Growth of Scientific Knowledge. Routledge & Kegan Paul.
Popper, K. R. (1972). Objective Knowledge: An Evolutionary Approach. Oxford University Press.
Ritchie, S. (2020). Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. Metropolitan Books.
Royal Society Open Science. (2020). Registered Reports [webpage]. https://royalsocietypublishing.org/rsos/registered-reports (retrieved April 20, 2020).
Scheel, A. M., Schijen, M., & Lakens, D. (2020). An excess of positive results: Comparing the standard psychology literature with Registered Reports. Advances in Methods and Practices in Psychological Science, 4(2). https://doi.org/10.1177/25152459211007467
Schönbrodt, F. D., & Wagenmakers, E.-J. (2018). Bayes factor design analysis: Planning for compelling evidence. Psychonomic Bulletin & Review, 25, 128–142.
Schönbrodt, F. D., Wagenmakers, E.-J., Zehetleitner, M., & Perugini, M. (2017). Sequential hypothesis testing with Bayes factors: Efficiently testing mean differences. Psychological Methods, 22, 322–339.
Schooler, J. W., & Engstler-Schooler, T. Y. (1990). Verbal overshadowing of visual memories: Some things are better left unsaid. Cognitive Psychology, 22, 36–71.
Simmons, J., Nelson, L., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366.
Simmons, J., & Simonsohn, U. (2015, May 8). Power posing: Reassessing the evidence behind the most popular TED talk. Data Colada. http://datacolada.org/2015/05/08/37-power-posing-reassessing-the-evidence-behind-the-most-popular-ted-talk
Simons, D. J., Holcombe, A. O., & Spellman, B. A. (2014). An introduction to Registered Replication Reports at Perspectives on Psychological Science. Perspectives on Psychological Science, 9, 552–555.
Simonsohn, U., Simmons, J. P., & Nelson, L. D. (2019). Specification curve: Descriptive and inferential statistics on all reasonable specifications. SSRN. http://doi.org/10.2139/ssrn.2694998
Soderberg, C. K., Errington, T. M., Schiavone, S. R., Bottesini, J., Thorn, F. S., Vazire, S., et al. (2021). Initial evidence of research quality of registered reports compared with the standard publishing model. Nature Human Behaviour, 5(8), 990–997.
Steegen, S., Tuerlinckx, F., Gelman, A., & Vanpaemel, W. (2016). Increasing transparency through a multiverse analysis. Perspectives on Psychological Science, 11, 702–712.
Wagenmakers, E.-J., Verhagen, A. J., Ly, A., Matzke, D., Steingroever, H., Rouder, J. N., & Morey, R. D. (2017). The need for Bayesian hypothesis testing in psychological science. In Lilienfeld, S. & Waldman, I. (eds.), Psychological Science under Scrutiny: Recent Challenges and Proposed Solutions (pp. 123–138). John Wiley and Sons.
Wagenmakers, E.-J., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7, 627–633.
Westlake, W. J. (1972). Use of confidence intervals in analysis of comparative bioavailability trials. Journal of Pharmaceutical Sciences, 61, 1340–1341. https://doi.org/10.1002/jps.2600610845
Yanai, I., & Lercher, M. (2020). A hypothesis is a liability. Genome Biology, 21, 231. https://doi.org/10.1186/s13059-020-02133-w
