
The Use of the Effect Size in JCR Spanish Journals of Psychology: From Theory to Fact

Published online by Cambridge University Press: 10 January 2013

Juan García García*
Affiliation:
Universidad de Almería (Spain)
Elena Ortega Campos
Affiliation:
Universidad de Almería (Spain)
Leticia De la Fuente Sánchez
Affiliation:
Universidad de Almería (Spain)
*
Correspondence concerning this article should be addressed to Juan García García. Facultad de Psicología, Universidad de Almería. Cañada de San Urbano s/n. 04120 Almería (Spain). E-mail: jgarciag@ual.es

Abstract

In 1999, Wilkinson and the Task Force on Statistical Inference published “Statistical Methods in Psychology Journals: Guidelines and Explanations.” The authors made several recommendations for improving the methodological quality of psychology research papers, one of which was to report an effect-size index alongside the results. In 2001, the fifth edition of the Publication Manual of the American Psychological Association incorporated this recommendation. In Spain, in 2003, scientific journals such as Psicothema and the International Journal of Clinical and Health Psychology (IJCHP) published editorials and papers stressing the need to report effect sizes in research articles. The aim of this study is to determine whether the papers published from 2003 to 2008 in the four Spanish psychology journals indexed in the Journal Citation Reports reported an effect-size index for their results. The findings indicate that, in general, compliance with the recommendation has been scarce, although its evolution over the period analyzed differs across journals.

With the 1999 publication of the report Statistical Methods in Psychology Journals: Guidelines and Explanations (Wilkinson & the Task Force on Statistical Inference), a series of recommendations was issued to improve the quality of the methodological presentation of research articles in psychology, one of them being the calculation of an appropriate statistic to assess effect size when presenting results. In 2001, the fifth edition of the Publication Manual of the American Psychological Association included this recommendation. In Spain, in 2003, scientific journals such as Psicothema and the International Journal of Clinical and Health Psychology (IJCHP) published editorials and guidelines expressing the need to calculate effect sizes in research papers. The aim of this study is to determine whether the articles published from 2003 to 2008 in the four Spanish journals indexed in the Journal Citation Reports reported an effect-size index for their results. The results indicate that, in general, compliance with the recommendation has been scarce, although its evolution over the period analyzed differs across journals.
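As background for readers unfamiliar with the indices the study tracks (an illustrative note, not part of the original article): one of the most commonly reported effect-size measures is Cohen's d for two independent groups, which expresses the difference between group means in pooled standard-deviation units:

$$
d = \frac{\bar{X}_1 - \bar{X}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
$$

where $\bar{X}_i$, $s_i^2$, and $n_i$ are the mean, variance, and sample size of group $i$. By Cohen's widely used benchmarks, values around 0.2, 0.5, and 0.8 are read as small, medium, and large effects; reporting such an index (d, r, $\eta^2$, or similar) alongside the p value is the practice whose adoption the study measures.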

Type
Research Article
Copyright
Copyright © Cambridge University Press 2011

References

American Psychological Association. (1994). Publication Manual of the American Psychological Association (4th ed.). Washington, DC: Author.
American Psychological Association. (2001). Publication Manual of the American Psychological Association (5th ed.). Washington, DC: Author.
American Psychological Association. (2009). Publication Manual of the American Psychological Association (6th ed.). Washington, DC: Author.
Bezeau, S., & Graves, R. (2001). Statistical power and effect sizes of clinical neuropsychology research. Journal of Clinical and Experimental Neuropsychology, 23, 399406. doi:10.1076/jcen.23.3.399.1181Google Scholar
Bobenrieth, M. A. (2002). Normas para revisión de artículos originales en Ciencias de la Salud [Review guidelines of original articles on Health Sciences]. International Journal of Clinical and Health Psychology, 2, 509523.Google Scholar
Bono, R., & Arnau, J. (1995). Consideraciones generales en torno a los estudios de potencia [General considerations about statistical power studies]. Anales de Psicología, 11, 193202.Google Scholar
Borges, A., San Luis, C., Sánchez, J. A., & Cañadas, I. (2001). El juicio contra la hipótesis nula: muchos testigos y una sentencia virtuosa [The judgment against null hypothesis. Many witnesses and a virtuous sentence]. Psicothema, 13, 173178.Google Scholar
Botella, J., & Gambara, H. (2006). Doing and reporting a meta-analysis. International Journal of Clinical and Health Psychology, 6, 425440.Google Scholar
Carver, R. P. (1978). The case against statistical significance testing. Harvard Educational Review, 48, 378399.Google Scholar
Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45, 13041312. doi:10.1037//0003-066X.45.12.1304CrossRefGoogle Scholar
Cohen, J. (1992). Cosas que he aprendido (hasta ahora) [Things I have learned (so far)]. Anales de Psicología, 8(1–2), 318.Google Scholar
Crosby, R. D., Wonderlich, S. A., Mitchell, J. E., De Zwaan, M., Engel, S. G., Connolly, K.,… Taheri, M. (2006). An empirical analysis of eating disorders and anxiety disorders publications (1980-2000) – Part II: Statistical hypothesis testing. International Journal of Eating Disorders, 39, 4954.Google Scholar
Cumming, G., Fidler, F., Leonard, M., Kalinowski, P., Christiansen, A., Kleinig, A.,… Wilson, S. (2007). Statistical reform in psychology: Is anything changing? Psychological Science, 18, 230232. doi:10.1111/j.1467-9280.2007.01881.xGoogle Scholar
Faulkner, C., Fidler, F., & Cumming, G. (2008). The value of RCT evidence depends on the quality of statistical analysis. Behavior Research and Therapy, 46, 270281. doi:10.1016/j.brat.2007.12.001Google Scholar
Fidler, F. (2002). The Fifth edition of the APA Publication Manual: Why its statistics recommendations are so controversial. Educational and Psychological Measurement, 62, 749770. doi:10.1177/001316402236876Google Scholar
Fidler, F., Cumming, G., Thomason, N., Pannuzzo, D., Smith, J., Fyffe, P.,… Schmit, R. (2005). Toward improved statistical reporting in the Journal of Consulting and Clinical Psychology. Journal of Consulting and Clinical Psychology, 73, 136143. doi:10.1037/0022-006X.73.1.136CrossRefGoogle ScholarPubMed
Frías, M. D., Pascual, J., & García, F. (2000). Tamaño del efecto del tratamiento y significación estadística. [Effect size and statistical significance]. Psicothema, 12, 236240.Google Scholar
García, J., Ortega, E., & De la Fuente, L. (2008). Tamaño del Efecto en las revistas de psicología indizadas en Redalyc [Effect size in psychology journals indexed in redalyc]. Informes Psicológicos, 10, 173188.Google Scholar
Green, B. F., & Hall, J. A. (1984). Quantitative Methods for Literature Reviews. Annual Review of Psychology, 35, 3754. doi:10.1146/annurev.ps.35.020184.000345Google Scholar
Hunter, J. E. (1979, September). Cumulating results across studies: A critique of factor analysis, canonical correlation, MANOVA and statistical significance testing. Invited address presented at the 86th Annual Convention of the American Psychological Association, New York, NY.
Jones, L. V. (1955). Statistics and research design. Annual Review of Psychology, 6, 405–430. doi:10.1146/annurev.ps.06.020155.002201
Killeen, P. R. (2008). A rational foundation for scientific decisions: The case for the probability of replication statistic. In J. W. Osborne (Ed.), Best practices in quantitative methods (pp. 103–124). Thousand Oaks, CA: Sage.
Kirk, R. E. (1996). Practical significance: A concept whose time has come. Educational and Psychological Measurement, 56, 746–759. doi:10.1177/0013164496056005002
Kish, L. (1959). Some statistical problems in research design. American Sociological Review, 24, 328–338. doi:10.2307/2089381
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis (Applied Social Research Methods Series, Vol. 49). Thousand Oaks, CA: Sage.
Monterde, H., Pascual, J., & Frías, M. D. (2006). Errores de interpretación de los métodos estadísticos: importancia y recomendaciones [Interpretation mistakes in statistical methods: Their importance and some recommendations]. Psicothema, 18, 848–856.
Nelson, N., Rosenthal, R., & Rosnow, R. L. (1986). Interpretation of significance levels and effect sizes by psychological researchers. American Psychologist, 41, 1299–1301. doi:10.1037//0003-066X.41.11.1299
Oakes, M. L. (1986). Statistical inference: A commentary for the social and behavioral sciences. New York, NY: Wiley.
Osborne, J. W. (2008). Sweating the small stuff in educational psychology: How effect size and power reporting failed to change from 1969 to 1999, and what that means for the future of changing practices. Educational Psychology, 28, 151–160. doi:10.1080/01443410701491718
Pascual, J., Frías, M. D., & Monterde-Bort, H. (2004). Tratamientos psicológicos con apoyo empírico y práctica clínica basada en la evidencia [Empirically supported psychological treatments and evidence-based clinical practice]. Papeles del Psicólogo, 87, 1–8.
Pascual, J., García, J., & Frías, M. D. (2000). Significación estadística, importancia del efecto y replicabilidad de los datos [Statistical significance, effect importance, and data replicability]. Psicothema, 12, 408–412.
Ramos-Álvarez, M. M., & Catena, A. (2003). Normas para la elaboración y revisión de artículos originales experimentales en Ciencias del Comportamiento [Criteria of the peer review process for publication of experimental research in Behavioral Sciences]. International Journal of Clinical and Health Psychology, 4, 173–189.
Rozeboom, W. W. (1960). The fallacy of the null-hypothesis significance test. Psychological Bulletin, 57, 416–428. doi:10.1037/h0042040
Sánchez-Meca, J., Boruch, R. F., Petrosino, A., & Rosa, A. I. (2002). La colaboración Campbell y la práctica basada en la evidencia [The Campbell Collaboration and evidence-based practice]. Papeles del Psicólogo, 83, 44–48.
Thompson, B. (1993). The use of statistical significance tests in research: Bootstrap and other alternatives. Journal of Experimental Education, 61, 361–377.
Thompson, B. (1999). Statistical significance test, effect size reporting, and the vain pursuit of pseudo-objectivity. Theory & Psychology, 9, 191–196. doi:10.1177/095935439992007
Thompson, B., & Snyder, P. A. (1998). Statistical significance testing and reliability analyses in recent JCD research articles. Journal of Counseling and Development, 76, 436–441.
Vacha-Haase, T., & Ness, C. M. (1999). Statistical significance testing as it relates to practice: Use within Professional Psychology: Research and Practice. Professional Psychology: Research and Practice, 30, 104–105. doi:10.1037//0735-7028.30.1.104
Vacha-Haase, T., & Nilsson, J. E. (1998). Further comments on statistical significance tests. Measurement and Evaluation in Counseling and Development, 31, 63–67.
Vacha-Haase, T., Nilsson, J. E., Reetz, D. R., Lance, T. S., & Thompson, B. (2000). Reporting practices and APA editorial policies regarding statistical significance and effect size. Theory & Psychology, 10, 413–425. doi:10.1177/0959354300103006
Velicer, W. F., Cumming, G., Fava, J. L., Rossi, J. S., Prochaska, J. O., & Johnson, J. (2008). Theory testing using quantitative predictions of effect size. Applied Psychology, 57, 589–608. doi:10.1111/j.1464-0597.2008.00348.x
Wilkinson, L., & Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594–604. doi:10.1037/0003-066X.54.8.594
Zuckerman, M., Hodgins, H. S., Zuckerman, A., & Rosenthal, R. (1993). Contemporary issues in the analysis of data: A survey of 551 psychologists. Psychological Science, 4, 49–53. doi:10.1111/j.1467-9280.1993.tb00556.x