
A Speeded Item Response Model: Leave the Harder till Later

Published online by Cambridge University Press:  01 January 2025

Yu-Wei Chang* (National Tsing-Hua University)
Rung-Ching Tsai (National Taiwan Normal University)
Nan-Jung Hsu (National Tsing-Hua University)

*Requests for reprints should be sent to Yu-Wei Chang, Institute of Statistics, National Tsing-Hua University, No. 101, Sec. 2, Kuang-Fu Road, Hsinchu 30013, Taiwan. E-mail: ywchang1225@gmail.com

Abstract

A speeded item response model is proposed. We consider a time-limited test in which examinees may defer the harder items until later in the test period; with such a strategy, they may not finish answering some of the harder items within the allocated time. The proposed model describes this mechanism by incorporating a speeded-effect term into the two-parameter logistic (2PL) item response model. A Bayesian estimation procedure using Markov chain Monte Carlo (MCMC) is presented, and the model's advantage over the 2PL model on speeded tests is demonstrated through simulations. For illustration, the methodology is applied to physics examination data from the Department Required Test for college entrance in Taiwan.
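The full model specification is behind the access wall, but the baseline 2PL model is standard, and the abstract indicates that the speeded effect enters as an additional term in it. The following is a minimal LaTeX sketch: the first equation is the standard 2PL, and the penalty term \delta_{ij} in the second is an illustrative assumption about how a speeded effect could enter, not the authors' exact formulation.

% Baseline 2PL model: ability theta_j, item discrimination a_i, item difficulty b_i
P(Y_{ij} = 1 \mid \theta_j) = \frac{\exp\{a_i(\theta_j - b_i)\}}{1 + \exp\{a_i(\theta_j - b_i)\}}

% Illustrative speeded-effect extension (assumed form): a nonnegative penalty
% \delta_{ij}, larger for harder items deferred toward the end of the time limit,
% attenuates the log-odds of a correct response.
P(Y_{ij} = 1 \mid \theta_j) = \frac{\exp\{a_i(\theta_j - b_i) - \delta_{ij}\}}{1 + \exp\{a_i(\theta_j - b_i) - \delta_{ij}\}}

Under this sketch, \delta_{ij} = 0 recovers the ordinary 2PL, so the speeded effect acts as a pure attenuation of the success probability for items not reached in time.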

Type: Original Paper
Copyright © 2013 The Psychometric Society

