
The Generalized DINA Model Framework

Published online by Cambridge University Press:  01 January 2025

Jimmy de la Torre*
Affiliation:
Rutgers, The State University of New Jersey
*
Requests for reprints should be sent to Jimmy de la Torre, Department of Educational Psychology, Rutgers, The State University of New Jersey, 10 Seminary Place, New Brunswick, NJ 08901, USA. E-mail: j.delatorre@rutgers.edu

Abstract

The G-DINA (generalized deterministic inputs, noisy "and" gate) model is a generalization of the DINA model with more relaxed assumptions. In its saturated form, the G-DINA model is equivalent to other general models for cognitive diagnosis based on alternative link functions. When appropriate constraints are applied, several commonly used cognitive diagnosis models (CDMs) can be shown to be special cases of the general models. In addition to model formulation, the G-DINA model as a general CDM framework includes a component for item-by-item model estimation based on design and weight matrices, and a component for item-by-item model comparison based on the Wald test. The paper illustrates the estimation and application of the G-DINA model as a framework using real and simulated data. It concludes by discussing several potential implications of and relevant issues concerning the proposed framework.
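To make the "special cases under constraints" idea concrete, here is a minimal Python sketch (illustrative only; the paper's own software was written in Ox) of the identity-link G-DINA item response function for one item. The function name `gdina_prob` and the dictionary-of-subsets parameterization are this sketch's conventions, not the paper's notation: each delta coefficient is attached to a subset of the item's required attributes, and an examinee's success probability is the sum of the coefficients for the subsets he or she has fully mastered. Constraining all terms except the intercept and the highest-order interaction to zero recovers the DINA model with guessing parameter g and slip parameter s.

```python
import itertools

def gdina_prob(alpha, delta):
    """Success probability for one item under the identity-link G-DINA model.

    alpha : tuple of 0/1 mastery indicators for the K* attributes the item requires
    delta : dict mapping tuples of attribute indices to delta coefficients;
            the empty tuple () holds the intercept (baseline probability).
    A subset's coefficient contributes only if every attribute in it is mastered.
    """
    return sum(coef for subset, coef in delta.items()
               if all(alpha[j] == 1 for j in subset))

# DINA as a constrained G-DINA: only the intercept and the highest-order
# interaction among the K* = 2 required attributes are nonzero.
g, s = 0.2, 0.1                      # guessing and slip parameters (example values)
delta_dina = {(): g, (0, 1): 1 - s - g}

for alpha in itertools.product([0, 1], repeat=2):
    p = gdina_prob(alpha, delta_dina)
    print(alpha, p)  # only (1, 1) rises above the guessing rate g
```

Running the loop shows the defining DINA property: every reduced attribute vector except (1, 1) yields probability g = 0.2, while full mastery yields 1 - s = 0.9. A saturated G-DINA item would instead assign a free coefficient to every subset, giving each of the 2^{K*} latent groups its own probability.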

Type: Original Paper
Copyright: © 2011 The Psychometric Society


Footnotes

An erratum to this article can be found at http://dx.doi.org/10.1007/s11336-011-9214-8

References

Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In Petrov, B.N., & Csaki, F. (Eds.), Proceedings of the second international symposium on information theory (pp. 267–281). Budapest: Akad. Kiado.
de la Torre, J. (2008). An empirically-based method of Q-matrix validation for the DINA model: development and applications. Journal of Educational Measurement, 45, 343–362.
de la Torre, J. (2009). A cognitive diagnosis model for cognitively-based multiple-choice options. Applied Psychological Measurement, 33, 163–183.
de la Torre, J. (2009). DINA model and parameter estimation: a didactic. Journal of Educational and Behavioral Statistics, 34, 115–130.
de la Torre, J., & Douglas, J. (2004). A higher-order latent trait model for cognitive diagnosis. Psychometrika, 69, 333–353.
de la Torre, J., & Douglas, J. (2008). Model evaluation and multiple strategies in cognitive diagnosis: an analysis of fraction subtraction data. Psychometrika, 73, 595–624.
Doornik, J.A. (2003). Object-oriented matrix programming using Ox (version 3.1) [Computer software]. London: Timberlake Consultants Press.
Fischer, G.H. (1973). The linear logistic test model as an instrument in educational research. Acta Psychologica, 37, 359–374.
Fischer, G.H. (1997). Unidimensional linear logistic Rasch models. In van der Linden, W., & Hambleton, R.K. (Eds.), Handbook of modern item response theory (pp. 225–244). New York: Springer.
Hagenaars, J.A. (1990). Categorical longitudinal data: loglinear panel, trend, and cohort analysis. Thousand Oaks: Sage.
Hagenaars, J.A. (1993). Loglinear models with latent variables. Thousand Oaks: Sage.
Hartz, S.M. (2002). A Bayesian framework for the Unified Model for assessing cognitive abilities: blending theory with practicality. Unpublished doctoral dissertation.
Henson, R., Templin, J., & Willse, J. (2009). Defining a family of cognitive diagnosis models using log-linear models with latent variables. Psychometrika, 74, 191–210.
Jaeger, J., Tatsuoka, C., & Berns, S. (2003). Innovative methods for extracting valid cognitive deficit profiles from NP test data in schizophrenia. Schizophrenia Research, 60, 140.
Junker, B.W., & Sijtsma, K. (2001). Cognitive assessment models with few assumptions, and connections with non-parametric item response theory. Applied Psychological Measurement, 25, 258–272.
Lehmann, E.L., & Casella, G. (1998). Theory of point estimation (2nd ed.). New York: Springer.
Leighton, J.P., Gierl, M.J., & Hunka, S. (2004). The attribute hierarchy method for cognitive assessment: a variation on Tatsuoka's rule-space approach. Journal of Educational Measurement, 41, 205–236.
Maris, E. (1999). Estimating multiple classification latent class models. Psychometrika, 64, 187–212.
Millon, T., Millon, C., Davis, R., & Grossman, S. (2006). MCMI-III manual (3rd ed.). Minneapolis: Pearson Assessments.
Rossi, G., Elklit, A., & Simonsen, E. (2010). Empirical evidence for a four factor framework of personality disorder organization: multigroup confirmatory factor analyses of the Millon Clinical Multiaxial Inventory—III personality disorder scales across Belgian and Danish data samples. Journal of Personality Disorders, 24, 128–150.
Rossi, G., Sloore, H., & Derksen, J. (2008). The adaptation of the MCMI-III in two non-English-speaking countries: state of the art of the Dutch language version. In Millon, T., & Bloom, C. (Eds.), The Millon inventories: a practitioner's guide to personalized clinical assessment (2nd ed., pp. 369–386). New York: Guilford.
Rossi, G., van der Ark, L.A., & Sloore, H. (2007). Factor analysis of the Dutch language version of the MCMI-III. Journal of Personality Assessment, 88, 144–157.
Roussos, L.A., DiBello, L.V., Stout, W., Hartz, S.M., Henson, R.A., & Templin, J.L. (2007). The fusion model skills diagnosis system. In Leighton, J.P., & Gierl, M.J. (Eds.), Cognitively diagnostic assessment for education: theory and applications (pp. 275–318). Cambridge: Cambridge University Press.
Stout, W. (2007). Skills diagnosis using IRT-based continuous latent trait models. Journal of Educational Measurement, 44, 313–324.
Tatsuoka, C. (2002). Data-analytic methods for latent partially ordered classification models. Journal of the Royal Statistical Society, Series C (Applied Statistics), 51, 337–350.
Tatsuoka, C. (2005). Corrigendum: data analytic methods for latent partially ordered classification models. Journal of the Royal Statistical Society, Series C (Applied Statistics), 54, 465–467.
Tatsuoka, K. (1983). Rule space: an approach for dealing with misconceptions based on item response theory. Journal of Educational Measurement, 20, 345–354.
Tatsuoka, K. (1990). Toward an integration of item-response theory and cognitive error diagnosis. In Frederiksen, N., Glaser, R., Lesgold, A., & Safto, M. (Eds.), Monitoring skills and knowledge acquisition (pp. 453–488). Hillsdale: Erlbaum.
Templin, J., & Henson, R. (2006). Measurement of psychological disorders using cognitive diagnosis models. Psychological Methods, 11, 287–305.
von Davier, M. (2005). A general diagnostic model applied to language testing data (ETS Research Report RR-05-16). Princeton: Educational Testing Service.
von Davier, M. (2009). Some notes on the reinvention of latent structure models as diagnostic classification models. Measurement, 7, 67–74.
von Davier, M., & Yamamoto, K. (2004, October). A class of models for cognitive diagnosis. Paper presented at the 4th Spearman Conference, Philadelphia, PA.