
Predicting Rights-Only Score Distributions from Data Collected Under Formula Score Instructions

Published online by Cambridge University Press:  01 January 2025

Hongwen Guo*
Affiliation: Educational Testing Service
*Correspondence should be made to Hongwen Guo, Educational Testing Service, Princeton, NJ, USA. Email: Hguo@ets.org

Abstract

Under formula score instructions (FSI), test takers may omit items. If students are instead encouraged to answer every item, as under rights-only scoring instructions (ROI), the score distribution will differ. In this paper, we formulate a simple statistical model to predict the ROI score distribution from data collected under FSI; estimation error is also provided. In addition, a preliminary investigation of the probability of guessing correctly on omitted items, and of its sensitivity, is presented. Based on the data used in this paper, the probability of guessing correctly may be close to, or slightly greater than, the chance score.
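The core idea of predicting an ROI distribution from FSI data can be illustrated with a simulation sketch. The sketch below is not the paper's model; it assumes only that each omitted item, if forced to be answered, would be answered correctly with some independent probability `p_guess` (e.g., the chance score 1/k on a k-option item), and that the function name and inputs (`rights`, `omits`) are hypothetical labels for FSI summary data.

```python
import numpy as np

def predict_roi_scores(rights, omits, p_guess, n_sim=10000, rng=None):
    """Simulate rights-only (ROI) scores from formula-score (FSI) data.

    Illustrative sketch: each examinee's ROI score is their observed
    number right plus a Binomial draw over their omitted items, each
    omitted item answered correctly with probability p_guess.
    """
    rng = np.random.default_rng(rng)
    rights = np.asarray(rights)
    omits = np.asarray(omits)
    # One simulated ROI score per examinee per replication.
    extra = rng.binomial(omits, p_guess, size=(n_sim, len(omits)))
    return rights + extra  # shape (n_sim, n_examinees)

# Example: three examinees on 5-option items (chance score = 0.2).
scores = predict_roi_scores([20, 15, 30], [5, 10, 0], p_guess=0.2, rng=0)
```

Aggregating the simulated scores over examinees and replications yields a predicted ROI score distribution; its sensitivity to the assumed `p_guess` can be probed by varying that parameter around the chance score.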

Type
Original Paper
Copyright
Copyright © 2016 The Psychometric Society

