
The Reduced RUM as a Logit Model: Parameterization and Constraints

Published online by Cambridge University Press:  01 January 2025

Chia-Yi Chiu*
Affiliation:
Rutgers, The State University of New Jersey
Hans-Friedrich Köhn
Affiliation:
University of Illinois at Urbana-Champaign
*Correspondence should be made to Chia-Yi Chiu, Rutgers, The State University of New Jersey, New Brunswick, NJ, USA. Email: chia-yi.chiu@gse.rutgers.edu

Abstract

Cognitive diagnosis models (CDMs) for educational assessment are constrained latent class models. Examinees are assigned to classes of intellectual proficiency defined in terms of cognitive skills, called attributes, that an examinee may or may not have mastered. Among CDMs, the Reduced Reparameterized Unified Model (Reduced RUM) has received considerable attention among psychometricians. The Reduced RUM is typically estimated with Markov chain Monte Carlo (MCMC) methods or the Expectation-Maximization (EM) algorithm. Commercial implementations of the EM algorithm are available in the latent class analysis (LCA) routines of Latent GOLD and Mplus, for example. Fitting the Reduced RUM with an LCA routine requires that it be reparameterized as a logit model, with constraints imposed on the parameters. For models involving two attributes, this parameterization and the associated constraints have been worked out; for models involving more than two attributes, however, they are nontrivial and currently unknown. In this article, the general parameterization of the Reduced RUM as a logit model involving any number of attributes, together with the associated parameter constraints, is derived. As a practical illustration, the LCA routine in Mplus is used for fitting the Reduced RUM to two synthetic data sets and a real-world data set; for comparison, results obtained with the MCMC implementation in OpenBUGS are also provided.
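The reparameterization the abstract describes can be sketched for the two-attribute case, where the logit form is known from the log-linear CDM literature (Henson, Templin, & Willse, 2009). The Reduced RUM item response function is P(X = 1 | α) = π* ∏ₖ r*ₖ^{qₖ(1−αₖ)}. The minimal Python sketch below uses hypothetical parameter values (`pi_star`, `r_star`) and recovers the saturated logit coefficients by solving from the class-specific response probabilities; the general constraints for more than two attributes are the article's contribution and are not reproduced here.

```python
import itertools
import math

def reduced_rum_prob(pi_star, r_star, q, alpha):
    """Reduced RUM item response function:
    P(X = 1 | alpha) = pi* * prod_k r*_k ** (q_k * (1 - alpha_k))."""
    p = pi_star
    for rk, qk, ak in zip(r_star, q, alpha):
        p *= rk ** (qk * (1 - ak))
    return p

def logit(p):
    return math.log(p / (1 - p))

# Hypothetical item measuring two attributes (Q-matrix entry q = (1, 1)).
pi_star, r_star, q = 0.9, (0.4, 0.6), (1, 1)

# Correct-response probability for each attribute pattern alpha.
probs = {a: reduced_rum_prob(pi_star, r_star, q, a)
         for a in itertools.product((0, 1), repeat=2)}

# Saturated logit parameterization:
#   logit P(X = 1 | alpha) = b0 + b1*a1 + b2*a2 + b12*a1*a2
# Solve for the coefficients from the four class probabilities.
b0 = logit(probs[(0, 0)])
b1 = logit(probs[(1, 0)]) - b0
b2 = logit(probs[(0, 1)]) - b0
b12 = logit(probs[(1, 1)]) - b0 - b1 - b2

# Round trip: the logit model reproduces the Reduced RUM probabilities.
for a in probs:
    eta = b0 + b1 * a[0] + b2 * a[1] + b12 * a[0] * a[1]
    assert abs(1 / (1 + math.exp(-eta)) - probs[a]) < 1e-9
```

Fitting this model in an LCA routine works in the opposite direction: the software estimates the logit coefficients directly, so constraints must be imposed on them to guarantee that the estimates map back to legitimate Reduced RUM parameters (0 < π* < 1, 0 < r*ₖ < 1), which is what the article derives for arbitrary numbers of attributes.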

Type: Article
Copyright © 2015 The Psychometric Society


References

Bolt, D., Chen, H., DiBello, L., Hartz, S., Henson, R., Roussos, L., Stout, W., & Templin, J. (2008). The Arpeggio Suite: Software for cognitive skills diagnostic assessment [Computer software]. St. Paul, MN: Assessment Systems.
Buck, G., & Tatsuoka, K. K. (1998). Application of the rule-space procedure to language testing: Examining attributes of a free response listening test. Language Testing, 15, 119–157.
Chiu, C.-Y., Douglas, J. A., & Li, X. (2009). Cluster analysis for cognitive diagnosis: Theory and applications. Psychometrika, 74, 633–665.
de la Torre, J. (2009). DINA model and parameter estimation: A didactic. Journal of Educational and Behavioral Statistics, 34, 115–130.
de la Torre, J. (2011). The generalized DINA model framework. Psychometrika, 76, 179–199.
DiBello, L. V., Roussos, L. A., & Stout, W. F. (2007). Review of cognitively diagnostic assessment and a summary of psychometric models. In C. R. Rao & S. Sinharay (Eds.), Handbook of statistics: Volume 26. Psychometrics (pp. 979–1030). Amsterdam: Elsevier.
Feng, Y., Habing, B. T., & Huebner, A. (2014). Parameter estimation of the Reduced RUM using the EM algorithm. Applied Psychological Measurement, 38, 137–150.
Haberman, S. J., & von Davier, M. (2007). Some notes on models for cognitively based skill diagnosis. In C. R. Rao & S. Sinharay (Eds.), Handbook of statistics: Volume 26. Psychometrics (pp. 1031–1038). Amsterdam: Elsevier.
Hartz, S. M. (2002). A Bayesian framework for the Unified Model for assessing cognitive abilities: Blending theory with practicality (Doctoral dissertation). Available from ProQuest Dissertations and Theses Database (UMI No. 3044108).
Hartz, S. M., Roussos, L. A., Henson, R. A., & Templin, J. L. (2005). The fusion model for skill diagnosis: Blending theory with practicality. Unpublished manuscript.
Henson, R., & Douglas, J. (2005). Test construction for cognitive diagnosis. Applied Psychological Measurement, 29, 262–277.
Henson, R., Roussos, L. A., Douglas, J., & He, X. (2008). Cognitive diagnostic attribute-level discrimination indices. Applied Psychological Measurement, 32, 275–288.
Henson, R., Templin, J. L., & Douglas, J. (2007). Using efficient model based sum-scores for conducting skills diagnoses. Journal of Educational Measurement, 44, 361–376.
Henson, R. A., & Templin, J. (2007, April). Large-scale language assessment using cognitive diagnosis models. Paper presented at the Annual Meeting of the National Council on Measurement in Education, Chicago, IL.
Henson, R. A., Templin, J. L., & Willse, J. T. (2009). Defining a family of cognitive diagnosis models using log-linear models with latent variables. Psychometrika, 74, 191–210.
Junker, B. W., & Sijtsma, K. (2001). Cognitive assessment models with few assumptions, and connections with nonparametric item response theory. Applied Psychological Measurement, 25, 258–272.
Kim, Y.-H. (2011). Diagnosing EAP writing ability using the Reduced Reparameterized Unified Model. Language Testing, 28, 509–541.
Leighton, J., & Gierl, M. (2007). Cognitive diagnostic assessment for education: Theory and applications. Cambridge, UK: Cambridge University Press.
Liu, Y., Douglas, J. A., & Henson, R. A. (2009). Testing person fit in cognitive diagnosis. Applied Psychological Measurement, 33, 579–598.
Lunn, D., Spiegelhalter, D., Thomas, A., & Best, N. (2009). The BUGS project: Evolution, critique, and future directions. Statistics in Medicine, 28, 3049–3067.
Macready, G. B., & Dayton, C. M. (1977). The use of probabilistic models in the assessment of mastery. Journal of Educational Statistics, 33, 379–416.
Muthén, L. K., & Muthén, B. O. (1998–2011). Mplus user's guide (Version 6.1) [Computer software and manual]. Los Angeles: Muthén & Muthén.
Rupp, A. A., Templin, J. L., & Henson, R. A. (2010). Diagnostic measurement: Theory, methods, and applications. New York: Guilford.
Tatsuoka, K. K. (1983). Rule-space: An approach for dealing with misconceptions based on item-response theory. Journal of Educational Measurement, 20, 345–354.
Tatsuoka, K. K. (1985). A probabilistic model for diagnosing misconception in the pattern classification approach. Journal of Educational Statistics, 12, 55–73.
Templin, J., & Bradshaw, L. (2014). Hierarchical diagnostic classification models: A family of models for estimating and testing attribute hierarchies. Psychometrika, 79, 317–339.
Templin, J., & Hoffman, L. (2013). Obtaining diagnostic classification model estimates using Mplus. Educational Measurement: Issues and Practice, 32, 37–50.
Templin, J. L., & Henson, R. A. (2006). Measurement of psychological disorders using cognitive diagnosis models. Psychological Methods, 11, 287–305.
Templin, J. L., Henson, R. A., Templin, S. E., & Roussos, L. A. (2008). Robustness of hierarchical modeling of skill association in cognitive diagnosis models. Applied Psychological Measurement, 32, 559–574.
Vermunt, J. K., & Magidson, J. (2000). Latent GOLD user's guide. Boston: Statistical Innovations Inc.
von Davier, M. (2005, September). A general diagnostic model applied to language testing data (Research Report No. RR-05-16). Princeton, NJ: Educational Testing Service.
von Davier, M. (2008). A general diagnostic model applied to language testing data. British Journal of Mathematical and Statistical Psychology, 61, 287–301.
von Davier, M. (2011, September). Equivalency of the DINA model and a constrained general diagnostic model (Research Report No. RR-11-37). Princeton, NJ: Educational Testing Service.