
A Procedure for Assessing the Completeness of the Q-Matrices of Cognitively Diagnostic Tests

Published online by Cambridge University Press:  01 January 2025

Hans-Friedrich Köhn
Affiliation:
University of Illinois at Urbana-Champaign
Chia-Yi Chiu
Affiliation:
Rutgers, The State University of New Jersey

Correspondence should be made to Chia-Yi Chiu, Rutgers, The State University of New Jersey, New Brunswick, NJ, USA. Email: chia-yi.chiu@gse.rutgers.edu

Abstract

The Q-matrix of a cognitively diagnostic test is said to be complete if it allows for the identification of all possible proficiency classes among examinees. Completeness of the Q-matrix is therefore a key requirement for any cognitively diagnostic test. However, completeness is often difficult to establish, especially for tests with a large number of items involving multiple attributes. As an additional complication, completeness is not an intrinsic property of the Q-matrix, but can only be assessed in reference to the specific cognitive diagnosis model (CDM) supposed to underlie the data; that is, the Q-matrix of a given test can be complete for one model but incomplete for another. In this article, a method is presented for assessing whether a given Q-matrix is complete for a given CDM. The proposed procedure relies on the theoretical framework of general CDMs and is therefore legitimate for any CDM that can be reparameterized as a general CDM.
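The paper's general procedure is not reproduced here, but the notion of completeness can be illustrated concretely for one specific CDM. Under the DINA model, a Q-matrix is complete if and only if the ideal (error-free) response vectors of all 2^K attribute profiles are pairwise distinct (cf. Chiu, Douglas, & Li, 2009). The following Python sketch checks this by brute-force enumeration; the function and variable names are illustrative, not from the paper:

```python
import numpy as np
from itertools import product

def dina_ideal_response(q, alpha):
    """Ideal responses under DINA: item j is answered correctly
    iff profile alpha masters every attribute that item j requires."""
    return tuple(np.all(alpha >= q, axis=1).astype(int))

def is_complete_dina(q):
    """Q is complete for DINA iff all 2^K attribute profiles
    yield pairwise distinct ideal response vectors."""
    K = q.shape[1]
    patterns = {dina_ideal_response(q, np.array(a))
                for a in product([0, 1], repeat=K)}
    return len(patterns) == 2 ** K

# Items measuring {1}, {2}, and {1, 2}: contains the identity matrix.
q_complete = np.array([[1, 0], [0, 1], [1, 1]])
# Both items require both attributes: the profiles (0,0), (1,0), and
# (0,1) all produce the same all-zero ideal response vector.
q_incomplete = np.array([[1, 1], [1, 1]])
```

Here `is_complete_dina(q_complete)` returns `True` and `is_complete_dina(q_incomplete)` returns `False`, matching the known result that a Q-matrix containing the K × K identity (one single-attribute item per attribute) is complete for DINA. Note that this brute-force check is specific to DINA; the same Q-matrix may be incomplete under another CDM, which is precisely the complication the article addresses.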

Type: Original Paper
Copyright © 2016 The Psychometric Society


References

Bradshaw, L., Izsak, A., Templin, J., & Jacobson, E. (2014). Diagnosing teachers' understandings of rational numbers: Building a multidimensional test within the diagnostic classification framework. Educational Measurement: Issues and Practice, 33, 2–14.
Chen, J., de la Torre, J., & Zhang, Z. (2013). Relative and absolute fit evaluation in cognitive diagnosis modeling. Journal of Educational Measurement, 50, 123–140.
Chiu, C.-Y. (2013). Statistical refinement of the Q-matrix in cognitive diagnosis. Applied Psychological Measurement, 37, 598–618.
Chiu, C.-Y., & Douglas, J. A. (2013). A nonparametric approach to cognitive diagnosis by proximity to ideal response profiles. Journal of Classification, 30, 225–250.
Chiu, C.-Y., & Köhn, H.-F. (2016). The reduced RUM as a logit model: Parameterization and constraints. Psychometrika, 81, 350–370.
Chiu, C.-Y., Douglas, J. A., & Li, X. (2009). Cluster analysis for cognitive diagnosis: Theory and applications. Psychometrika, 74, 633–665.
de la Torre, J. (2008). An empirically based method of Q-matrix validation for the DINA model: Development and applications. Journal of Educational Measurement, 45, 343–362.
de la Torre, J. (2009a). A cognitive diagnosis model for cognitively based multiple-choice options. Applied Psychological Measurement, 33, 163–183.
de la Torre, J. (2009b). DINA model and parameter estimation: A didactic. Journal of Educational and Behavioral Statistics, 34, 115–130.
de la Torre, J. (2011). The generalized DINA model framework. Psychometrika, 76, 179–199.
de la Torre, J. (2014). Personal communication.
de la Torre, J., & Douglas, J. A. (2004). Higher-order latent trait models for cognitive diagnosis. Psychometrika, 69, 333–353.
de la Torre, J., & Douglas, J. A. (2008). Model evaluation and multiple strategies in cognitive diagnosis: An analysis of fraction subtraction data. Psychometrika, 73, 595–624.
de la Torre, J., & Lee, Y.-S. (2013). Evaluating the Wald test for item-level comparison of saturated and reduced models in cognitive diagnosis. Journal of Educational Measurement, 50, 355–373.
DeCarlo, L. T. (2011). On the analysis of fraction subtraction data: The DINA model, classification, latent class sizes, and the Q-matrix. Applied Psychological Measurement, 35, 8–26.
DeCarlo, L. T. (2012). Recognizing uncertainty in the Q-matrix via a Bayesian extension of the DINA model. Applied Psychological Measurement, 36, 447–468.
DiBello, L. V., Roussos, L. A., & Stout, W. F. (2007). Review of cognitively diagnostic assessment and a summary of psychometric models. In C. R. Rao & S. Sinharay (Eds.), Handbook of statistics: Psychometrics (pp. 979–1030). Amsterdam: Elsevier.
Feng, Y., Habing, B. T., & Huebner, A. (2013). Parameter estimation of the reduced RUM using the EM algorithm. Applied Psychological Measurement.
Haberman, S. J., & von Davier, M. (2007). Some notes on models for cognitively based skill diagnosis. In C. R. Rao & S. Sinharay (Eds.), Handbook of statistics: Psychometrics (pp. 1031–1038). Amsterdam: Elsevier.
Hartz, S. M. (2002). A Bayesian framework for the unified model for assessing cognitive abilities: Blending theory with practicality (Doctoral dissertation). Available from ProQuest Dissertations and Theses database (UMI No. 3044108).
Hartz, S. M., & Roussos, L. A. (2008, October). The fusion model for skill diagnosis: Blending theory with practicality (Research Report No. RR-08-71). Princeton, NJ: Educational Testing Service.
Henson, R. A., Templin, J. L., & Willse, J. T. (2009). Defining a family of cognitive diagnosis models using log-linear models with latent variables. Psychometrika, 74, 191–210.
Jang, E. E. (2009). Cognitive diagnostic assessment of L2 reading comprehension ability: Validity arguments for Fusion Model application to LanguEdge assessment. Language Testing, 26, 31–73.
Junker, B. W., & Sijtsma, K. (2001). Cognitive assessment models with few assumptions, and connections with nonparametric item response theory. Applied Psychological Measurement, 25, 258–272.
Jurich, D. P., & Bradshaw, L. P. (2014). An illustration of diagnostic classification modeling in student learning outcomes assessment. International Journal of Testing, 14, 49–72.
Kim, Y.-H. (2011). Diagnosing EAP writing ability using the reduced reparameterized unified model. Language Testing, 28, 509–541.
Leighton, J., & Gierl, M. (2007). Cognitive diagnostic assessment for education: Theory and applications. Cambridge: Cambridge University Press.
Li, H. (2011). A cognitive diagnostic analysis of the MELAB reading test. Spaan Fellow, 9, 17–46.
Li, H., & Suen, H. K. (2013). Constructing and validating a Q-matrix for cognitive diagnostic analyses of a reading test. Educational Assessment, 18, 1–25.
Liu, J., Xu, G., & Ying, Z. (2012). Data-driven learning of Q-matrix. Applied Psychological Measurement, 36, 548–564.
Liu, J., Xu, G., & Ying, Z. (2013). Theory of the self-learning Q-matrix. Bernoulli, 19, 1790–1817.
Macready, G. B., & Dayton, C. M. (1977). The use of probabilistic models in the assessment of mastery. Journal of Educational Statistics, 2, 99–120.
Mislevy, R. J. (1996). Test theory reconceived. Journal of Educational Measurement, 33, 379–416.
Rupp, A. A., Templin, J. L., & Henson, R. A. (2010). Diagnostic measurement: Theory, methods, and applications. New York: Guilford.
Su, Y.-L., Choi, K. M., Lee, W.-C., Choi, T., & McAninch, M. (2013). Hierarchical cognitive diagnostic analysis for TIMSS 2003 mathematics (CASMA Research Report No. 35). Iowa City: Center for Advanced Studies in Measurement and Assessment (CASMA), University of Iowa.
Tatsuoka, C. (2002). Data analytic methods for latent partially ordered classification models. Journal of the Royal Statistical Society Series C (Applied Statistics), 51, 337–350.
Tatsuoka, K. K. (1983). Rule space: An approach for dealing with misconceptions based on item response theory. Journal of Educational Measurement, 20, 345–354.
Tatsuoka, K. K. (1984). Analysis of errors in fraction addition and subtraction problems (Report No. NIE-G-81-0002). Urbana, IL: University of Illinois, Computer-based Education Research Library.
Tatsuoka, K. K. (1985). A probabilistic model for diagnosing misconception in the pattern classification approach. Journal of Educational and Behavioral Statistics, 12, 55–73.
Templin, J. L., & Henson, R. A. (2006). Measurement of psychological disorders using cognitive diagnosis models. Psychological Methods, 11, 287–305.
Templin, J., & Bradshaw, L. (2014). Hierarchical diagnostic classification models: A family of models for estimating and testing attribute hierarchies. Psychometrika, 79, 317–339.
Templin, J., & Hoffman, L. (2013). Obtaining diagnostic classification model estimates using Mplus. Educational Measurement: Issues and Practice, 32, 37–50.
von Davier, M. (2005, September). A general diagnostic model applied to language testing data (Research Report No. RR-05-16). Princeton, NJ: Educational Testing Service.
von Davier, M. (2008). A general diagnostic model applied to language testing data. British Journal of Mathematical and Statistical Psychology, 61, 287–301.
von Davier, M. (2014). The DINA model as a constrained general diagnostic model: Two variants of a model equivalency. British Journal of Mathematical and Statistical Psychology, 67, 49–71.