
Hierarchical Diagnostic Classification Models: A Family of Models for Estimating and Testing Attribute Hierarchies

Published online by Cambridge University Press:  01 January 2025

Jonathan Templin* (University of Kansas)
Laine Bradshaw (University of Georgia)
* Requests for reprints should be sent to Jonathan Templin, Department of Psychology and Research in Education, University of Kansas, 1122 West Campus Rd., Joseph R. Pearson Hall, Room 621, Lawrence, KS 66045, USA. E-mail: jtemplin@ku.edu

Abstract

Although latent attributes that follow a hierarchical structure are anticipated in many areas of educational and psychological assessment, current psychometric models are limited in their capacity to objectively evaluate the presence of such attribute hierarchies. This paper introduces the Hierarchical Diagnostic Classification Model (HDCM), which adapts the Log-linear Cognitive Diagnosis Model to cases where attribute hierarchies are present. The utility of the HDCM is demonstrated through a simulation study and an empirical example. Simulation results show that the HDCM is efficiently estimated and provides a statistical test for the presence of an attribute hierarchy, a feature not available in more commonly used DCMs. Empirically, the HDCM is used to test for a suspected attribute hierarchy in a test of English grammar, confirming that the data are more adequately represented by a hierarchical attribute structure than by a crossed, or nonhierarchical, structure.
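The contrast between hierarchical and crossed attribute structures comes down to which mastery profiles are admissible: a crossed structure permits all 2^K combinations of K binary attributes, while a hierarchy rules out profiles in which an attribute is mastered before its prerequisite. As a minimal sketch (not code from the paper; the function name and the edge-list representation of the hierarchy are illustrative assumptions), one can enumerate the profiles a given hierarchy permits:

```python
from itertools import product

def permissible_profiles(n_attrs, prereqs):
    """Enumerate binary attribute-mastery profiles consistent with a hierarchy.

    prereqs: list of (parent, child) index pairs, meaning the child attribute
    cannot be mastered (1) unless the parent attribute is also mastered.
    An empty list corresponds to a crossed (nonhierarchical) structure.
    """
    profiles = []
    for alpha in product((0, 1), repeat=n_attrs):
        # keep the profile only if every prerequisite relation holds
        if all(alpha[parent] >= alpha[child] for parent, child in prereqs):
            profiles.append(alpha)
    return profiles

# A linear hierarchy A1 -> A2 -> A3 retains 4 of the 2^3 = 8 crossed profiles.
print(permissible_profiles(3, [(0, 1), (1, 2)]))
# [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
```

The reduction from 2^K to K + 1 profiles under a fully linear hierarchy is what makes the hierarchical and crossed models nested, so their fit can be compared statistically.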

Type
Original Paper
Copyright
Copyright © 2013 The Psychometric Society

