
Online Calibration Methods for the DINA Model with Independent Attributes in CD-CAT

Published online by Cambridge University Press:  01 January 2025

Ping Chen* (Beijing Normal University)
Tao Xin (Beijing Normal University)
Chun Wang (University of Illinois at Urbana-Champaign)
Hua-Hua Chang (University of Illinois at Urbana-Champaign)

*Requests for reprints should be sent to Ping Chen, National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, No. 19, Xin Jie Kou Wai Street, Hai Dian District, Beijing 100875, China. E-mail: musclechen@126.com

Abstract

Item replenishing is essential for item bank maintenance in cognitive diagnostic computerized adaptive testing (CD-CAT). In regular CAT, online calibration is commonly used to calibrate new items continuously; to date, however, no published work has addressed online calibration for CD-CAT. This study therefore investigates the possibility of extending several current CAT calibration strategies to CD-CAT. Three representative online calibration methods were examined: Method A (Stocking, 1988), marginal maximum likelihood estimation with one EM cycle (OEM; Wainer & Mislevy, 1990), and marginal maximum likelihood estimation with multiple EM cycles (MEM; Ban, Hanson, Wang, Yi, & Harris, 2001). The objective of the current paper is to generalize these methods to the CD-CAT context with appropriate theoretical justification; the new methods are denoted CD-Method A, CD-OEM, and CD-MEM, respectively. Simulation studies were conducted to compare the three methods in terms of item-parameter recovery. The results show that all three methods recover item parameters accurately, and that CD-Method A performs best when items have smaller slipping and guessing parameters. This research is a starting point for introducing online calibration into CD-CAT, and further studies are proposed to investigate different sample sizes, cognitive diagnostic models, and attribute-hierarchical structures.
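For readers unfamiliar with the model, the DINA (deterministic inputs, noisy "and" gate) item response function referenced above can be sketched in standard notation (e.g., de la Torre, 2009), rather than as the paper's own equations: an examinee with attribute pattern $\boldsymbol{\alpha}_i = (\alpha_{i1}, \ldots, \alpha_{iK})$ answers item $j$, which has slipping parameter $s_j$ and guessing parameter $g_j$, correctly with probability

$$P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i) = (1 - s_j)^{\eta_{ij}} \, g_j^{\,1 - \eta_{ij}}, \qquad \eta_{ij} = \prod_{k=1}^{K} \alpha_{ik}^{q_{jk}},$$

where $q_{jk}$ is the Q-matrix entry indicating whether item $j$ requires attribute $k$, so that $\eta_{ij} = 1$ only if examinee $i$ has mastered every attribute the item requires. Online calibration in CD-CAT then amounts to estimating $s_j$ and $g_j$ for new items from responses collected during adaptive administration, which is the task CD-Method A, CD-OEM, and CD-MEM address.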

Type: Original Paper
Copyright © 2012 The Psychometric Society

References

Ban, J.-C., Hanson, B.H., Wang, T., Yi, Q., & Harris, D.J. (2001). A comparative study of on-line pretest item-calibration/scaling methods in computerized adaptive testing. Journal of Educational Measurement, 38, 191–212.
Ban, J.-C., Hanson, B.H., Yi, Q., & Harris, D.J. (2002). Data sparseness and online pretest item calibration/scaling methods in CAT (ACT Research Report 02-01). Iowa City, IA: ACT, Inc. Available at http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/19/da/e9.pdf
Chang, Y.-C.I., & Lu, H. (2010). Online calibration via variable length computerized adaptive testing. Psychometrika, 75, 140–157.
Cheng, Y. (2009). When cognitive diagnosis meets computerized adaptive testing. Psychometrika, 74, 619–632.
Cheng, Y., & Chang, H. (2007). The modified maximum global discrimination index method for cognitive diagnostic computerized adaptive testing. Paper presented at the 2007 GMAC Conference on Computerized Adaptive Testing, McLean, USA, June.
DiBello, L.V., Stout, W.F., & Roussos, L.A. (1995). Unified cognitive/psychometric diagnostic assessment likelihood-based classification techniques. In P. Nichols, S. Chipman, & R. Brennan (Eds.), Cognitively diagnostic assessments (pp. 361–389). Hillsdale, NJ: Erlbaum.
de la Torre, J. (2009). DINA model and parameter estimation: A didactic. Journal of Educational and Behavioral Statistics, 34, 115–130.
de la Torre, J., & Douglas, J.A. (2004). Higher-order latent trait models for cognitive diagnosis. Psychometrika, 69, 333–353.
Doignon, J.P., & Falmagne, J.C. (1999). Knowledge spaces. New York: Springer.
Embretson, S. (1984). A general latent trait model for response processes. Psychometrika, 49, 175–186.
Embretson, S., & Reise, S. (2000). Item response theory for psychologists. Mahwah, NJ: Erlbaum.
Fedorov, V.V. (1972). Theory of optimal design. New York: Academic Press.
Haertel, E.H. (1989). Using restricted latent class models to map the skill structure of achievement items. Journal of Educational Measurement, 26, 333–352.
Hartz, S.M. (2002). A Bayesian framework for the unified model for assessing cognitive abilities: Blending theory with practicality (Unpublished doctoral dissertation). University of Illinois at Urbana-Champaign, Urbana-Champaign, IL.
Junker, B.W., & Sijtsma, K. (2001). Cognitive assessment models with few assumptions, and connections with nonparametric item response theory. Applied Psychological Measurement, 25, 258–272.
Leighton, J.P., Gierl, M.J., & Hunka, S.M. (2004). The attribute hierarchy method for cognitive assessment: A variation on Tatsuoka's rule-space approach. Journal of Educational Measurement, 41, 205–237.
Liu, H., You, X., Wang, W., Ding, S., & Chang, H. (2010). Large-scale applications of cognitive diagnostic computerized adaptive testing in China. Paper presented at the annual meeting of the National Council on Measurement in Education, Denver, CO, April.
Macready, G.B., & Dayton, C.M. (1977). The use of probabilistic models in the assessment of mastery. Journal of Educational Statistics, 33, 379–416.
Makransky, G. (2009). An automatic online calibration design in adaptive testing. Paper presented at the 2007 GMAC Conference on Computerized Adaptive Testing, McLean, USA, June.
Maris, E. (1999). Estimating multiple classification latent class models. Psychometrika, 64, 187–212.
McGlohen, M.K. (2004). The application of cognitive diagnosis and computerized adaptive testing to a large-scale assessment (Unpublished doctoral dissertation). University of Texas at Austin.
McGlohen, M.K., & Chang, H. (2008). Combining computer adaptive testing technology with cognitively diagnostic assessment. Behavior Research Methods, 40, 808–821.
Rupp, A., & Templin, J. (2008). The effects of Q-matrix misspecification on parameter estimates and classification accuracy in the DINA model. Educational and Psychological Measurement, 68, 78–96.
Silvey, S.D. (1980). Optimal design. London: Chapman and Hall.
Stocking, M.L. (1988). Scale drift in on-line calibration (Research Rep. 88-28). Princeton, NJ: ETS.
Tatsuoka, K.K. (1995). Architecture of knowledge structures and cognitive diagnosis: A statistical pattern classification approach. In P. Nichols, S. Chipman, & R. Brennan (Eds.), Cognitively diagnostic assessments (pp. 327–359). Hillsdale, NJ: Erlbaum.
Tatsuoka, C. (2002). Data analytic methods for latent partially ordered classification models. Journal of the Royal Statistical Society, Series C (Applied Statistics), 51, 337–350.
Tatsuoka, K.K., & Tatsuoka, M.M. (1997). Computerized cognitive diagnostic adaptive testing: Effect on remedial instruction as empirical validation. Journal of Educational Measurement, 34, 3–20.
Templin, J., & Henson, R. (2006). Measurement of psychological disorders using cognitive diagnosis models. Psychological Methods, 11, 287–305.
Wainer, H. (1990). Computerized adaptive testing: A primer. Hillsdale, NJ: Erlbaum.
Wainer, H., & Mislevy, R.J. (1990). Item response theory, item calibration, and proficiency estimation. In H. Wainer (Ed.), Computerized adaptive testing: A primer (pp. 65–102). Hillsdale, NJ: Erlbaum.
Weiss, D.J. (1982). Improving measurement quality and efficiency with adaptive testing. Applied Psychological Measurement, 6, 473–492.
Xu, X., Chang, H., & Douglas, J. (2003). A simulation study to compare CAT strategies for cognitive diagnosis. Paper presented at the annual meeting of the National Council on Measurement in Education, Chicago, IL, April.