
An Item Response Model for Nominal Data Based on the Rising Selection Ratios Criterion

Published online by Cambridge University Press:  01 January 2025

Javier Revuelta*
Affiliation:
Autonoma University of Madrid
*
Requests for reprints should be sent to Javier Revuelta, Departamento de Psicologia Social y Metodologia, Universidad Autonoma Madrid, Cantoblanco 28049, Madrid, Spain. Email: javier.revuelta@uam.es

Abstract

Complete response vectors over all answer options of multiple-choice items can be used to estimate ability. The rising selection ratios criterion is necessary for scoring individuals because it implies that estimated ability always increases when the correct alternative is selected. This paper introduces the generalized DLT model, which assumes rising selection ratios and uses three parameters to describe each incorrect alternative. It is shown that neither the nominal categories model nor the Thissen and Steinberg model meets the criterion. A simulation study on goodness of recovery and an example with real data are also included.
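The abstract's claim that the nominal categories model can violate the rising selection ratios criterion can be illustrated numerically. The sketch below uses the standard form of Bock's (1972) nominal categories model, P_k(θ) = exp(a_k θ + c_k) / Σ_j exp(a_j θ + c_j), and takes the selection ratio of the correct option against a distractor to be the odds P_correct(θ)/P_k(θ), following Love's (1997) notion of selection ratios; the item parameters are hypothetical values chosen for illustration, not from the paper.

```python
import numpy as np

def nominal_probs(theta, a, c):
    """Category probabilities under Bock's (1972) nominal categories model:
    P_k(theta) = exp(a_k*theta + c_k) / sum_j exp(a_j*theta + c_j)."""
    z = np.outer(theta, a) + c          # shape (n_theta, n_categories)
    z -= z.max(axis=1, keepdims=True)   # stabilize the softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical 4-option item; option 0 is the correct answer.
# Note that distractor 1 has a larger slope than the correct option.
a = np.array([1.2, 1.5, -0.3, -0.8])   # slopes
c = np.array([0.5, -0.2, 0.1, -0.4])   # intercepts
theta = np.linspace(-3.0, 3.0, 121)
P = nominal_probs(theta, a, c)

# Under this model P_0/P_k = exp((a_0 - a_k)*theta + (c_0 - c_k)),
# so the ratio rises with theta only when a_0 > a_k.
for k in range(1, 4):
    ratio = P[:, 0] / P[:, k]
    rising = bool(np.all(np.diff(ratio) > 0))
    print(f"distractor {k}: selection ratio rising everywhere? {rising}")
```

Because a_1 > a_0, the ratio against distractor 1 is decreasing in θ, so selecting the correct answer over that distractor need not raise estimated ability; this is the kind of violation the paper attributes to the nominal categories model.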

Type
Original Paper
Copyright
Copyright © 2005 The Psychometric Society


Footnotes

This research was partially supported by the DGICYT grant PB 97-0049 and the Comunidad de Madrid grant 06/HSE/0005/2004. I would like to express my gratitude to three anonymous reviewers, the associate editor, and two editors for their helpful and thoughtful comments, which contributed to improving an earlier version of the manuscript.

References

Abrahamowicz, M., & Ramsay, J.O. (1992). Multicategorical spline model for item response theory. Psychometrika, 57, 5–27.
Abramowitz, M., & Stegun, I.A. (1965). Handbook of mathematical functions. New York: Dover Publications.
Andersen, E.B. (1980). Discrete statistical models with social science applications. Amsterdam: North Holland.
Birnbaum, A. (1968). Some latent trait models and their use in inferring an examinee’s ability. In F.M. Lord & M.R. Novick (Eds.), Statistical theories of mental test scores. Reading, MA: Addison-Wesley.
Bock, R.D. (1972). Estimating item parameters and latent ability when responses are scored in two or more nominal categories. Psychometrika, 37, 29–51.
Bock, R.D., & Aitkin, M. (1981). Marginal maximum likelihood estimation of item parameters: Application of an EM algorithm. Psychometrika, 46, 443–459.
De Ayala, R.J., & Sava-Bolesta, M. (1999). Item parameter recovery for the nominal response model. Applied Psychological Measurement, 23, 3–19.
Dempster, A.P., Laird, N.M., & Rubin, D.B. (1977). Maximum likelihood from incomplete data via the EM algorithm (with discussion). Journal of the Royal Statistical Society, Series B, 39, 1–38.
Fox, J.P., & Glas, C.A.W. (2001). Bayesian estimation of a multilevel IRT model using Gibbs sampling. Psychometrika, 66, 269–286.
Gelman, A., Carlin, J.B., Stern, H.S., & Rubin, D.B. (1995). Bayesian data analysis. London: Chapman and Hall.
Goodman, L.A., & Kruskal, W.H. (1979). Measures of association for cross classifications. New York: Springer-Verlag.
Louis, T.A. (1982). Finding the observed information matrix when using the EM algorithm. Journal of the Royal Statistical Society, Series B, 44, 226–233.
Love, T.E. (1997). Distractor selection ratios. Psychometrika, 62, 51–62.
Luce, R.D. (1959). Individual choice behavior: A theoretical analysis. New York: Wiley.
Mokken, R.J. (1997). Nonparametric models for dichotomous responses. In W.J. van der Linden & R.K. Hambleton (Eds.), Handbook of modern item response theory. New York: Springer.
Molenaar, I.W. (1997). Nonparametric models for polytomous responses. In W.J. van der Linden & R.K. Hambleton (Eds.), Handbook of modern item response theory. New York: Springer.
Ogasawara, H. (2002). Stable response functions with unstable item parameter estimates. Applied Psychological Measurement, 26, 239–254.
Patz, R.J., & Junker, B.W. (1999). Applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses. Journal of Educational and Behavioral Statistics, 24, 342–366.
Press, W.H., Flannery, B.P., Teukolsky, S.A., & Vetterling, W.T. (1988). Numerical recipes in C: The art of scientific computing. Cambridge: Cambridge University Press.
Revuelta, J. (2004). Analysis of distractor difficulty in multiple-choice items. Psychometrika, 69, 217–234.
Rosenbaum, P.R. (1984). Testing the conditional independence and monotonicity assumptions of item response theory. Psychometrika, 49, 425–435.
Rubinstein, R.Y. (1981). Simulation and the Monte Carlo method. New York: Wiley.
Samejima, F. (1994). Estimation of reliability coefficients using the test information function and its modifications. Applied Psychological Measurement, 18, 229–244.
Stoer, J., & Bulirsch, R. (1980). Introduction to numerical analysis. New York: Springer-Verlag.
Thissen, D. (1991). MULTILOG user's guide (Version 6). Chicago, IL: Scientific Software.
Thissen, D., & Steinberg, L. (1984). A response model for multiple-choice items. Psychometrika, 49, 501–519.
Thissen, D., & Wainer, H. (1982). Some standard errors in item response theory. Psychometrika, 47, 397–412.
Thissen, D., Steinberg, L., & Fitzpatrick, A.R. (1989). Multiple-choice items: The distractors are also part of the item. Journal of Educational Measurement, 26, 161–176.