
A Speeded Item Response Model with Gradual Process Change

Published online by Cambridge University Press:  01 January 2025

Yuri Goegebeur*
Affiliation:
K.U. Leuven and University of Southern Denmark
Paul De Boeck
Affiliation:
K.U. Leuven
James A. Wollack
Affiliation:
University of Wisconsin
Allan S. Cohen
Affiliation:
University of Georgia
*
Requests for reprints should be sent to Yuri Goegebeur, Department of Psychology, K.U. Leuven, Tiensestraat 102, 3000 Leuven, Belgium. E-mail: yuri.goegebeur@stat.sdu.dk

Abstract

An item response theory model for dealing with test speededness is proposed. The model consists of two random processes, a problem-solving process and a random guessing process, with random guessing gradually taking over from the problem-solving process. The change point and change rate involved are treated as random parameters in order to model examinee differences in both respects. The proposed model is evaluated on simulated data and in a case study.
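To make the gradual-change idea concrete, the sketch below illustrates one plausible reading of the abstract: a Rasch-type problem-solving probability mixed with a fixed guessing probability, where the guessing weight grows logistically with item position once a person-specific change point is passed, at a speed set by a person-specific change rate. The function name, the logistic form of the weight, and the 0.25 guessing level are illustrative assumptions, not the paper's exact specification.

```python
import math

def p_correct(theta, beta, item_position, change_point, change_rate, guess_prob=0.25):
    """Speeded-response sketch: Rasch problem solving mixed with random guessing.

    The guessing weight is near 0 early in the test and rises gradually once the
    examinee's change point is passed; the change rate controls how quickly the
    switch happens.  Names, the logistic weight, and the 0.25 guessing level are
    illustrative assumptions, not the authors' definitive formulation.
    """
    # Problem-solving process: Rasch (1PL) success probability.
    p_solve = 1.0 / (1.0 + math.exp(-(theta - beta)))
    # Gradual take-over by guessing along the item sequence (logistic weight).
    w_guess = 1.0 / (1.0 + math.exp(-change_rate * (item_position - change_point)))
    # Overall probability: position-dependent mixture of the two processes.
    return (1.0 - w_guess) * p_solve + w_guess * guess_prob

# Example: an examinee of ability 0.5 on a 40-item test who drifts into
# guessing around item 30.
probs = [p_correct(theta=0.5, beta=0.0, item_position=k,
                   change_point=30.0, change_rate=0.4)
         for k in range(1, 41)]
```

Treating change_point and change_rate as person-specific values (rather than constants, as in this toy example) is what lets such a model capture examinee differences in when and how abruptly speededness sets in.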

Type
Theory and Methods
Copyright
Copyright © 2007 The Psychometric Society


Footnotes

The research reported in this paper was supported by IAP P5/24 and GOA/2005/04, both awarded to Paul De Boeck and Iven Van Mechelen, and by IAP P6/03, awarded to Iven Van Mechelen. Yuri Goegebeur’s research was supported by a grant from the Danish Natural Science Research Council.
