
Using Response Times and Response Accuracy to Measure Fluency Within Cognitive Diagnosis Models

Published online by Cambridge University Press:  01 January 2025

Shiyu Wang* (University of Georgia)
Yinghan Chen (University of Nevada, Reno)

*Correspondence: Shiyu Wang, University of Georgia, Athens, USA. Email: swang44@uga.edu; URL: https://coe.uga.edu/directory/profiles/swang44

Abstract

The recent “Every Student Succeeds Act” encourages schools to use innovative assessments to provide feedback about students’ mastery of grade-level content standards. Mastery of a skill requires the ability to complete a task not only accurately but also fluently. This paper offers new insight into using both response times and response accuracy to measure fluency within the cognitive diagnosis model (CDM) framework. Defining fluency as the highest level of a categorical latent attribute, we propose a polytomous response accuracy model and two forms of response time models to infer fluency jointly, and we develop a Bayesian estimation approach to calibrate the newly proposed models. These models were applied to data collected from a spatial rotation test. Results demonstrate that, compared with a traditional CDM using response accuracy only, the proposed joint models reveal more information about test takers’ spatial skills. A set of simulation studies evaluates the accuracy of the model estimation algorithm and illustrates the models’ varying degrees of complexity.
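The core idea of the abstract can be sketched in a small simulation. The snippet below is a minimal, hypothetical illustration (not the authors' actual model or parameter values): one skill with three ordered attribute levels (0 = non-mastery, 1 = mastery, 2 = fluency), where accuracy increases with the attribute level and log response times decrease with it, so that fluency is identified jointly by being both accurate and fast.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one skill, three ordered levels
# (0 = non-mastery, 1 = mastery, 2 = fluency).
N, J = 500, 10                       # examinees, items
alpha = rng.integers(0, 3, size=N)   # latent attribute level per examinee

# Response accuracy: correct-response probability rises with the attribute
# level (illustrative values playing the role of guessing/slipping rates).
p_correct = np.array([0.2, 0.8, 0.9])[alpha]     # shape (N,)
Y = rng.random((N, J)) < p_correct[:, None]      # correct/incorrect matrix

# Response times: lognormal, with higher attribute levels responding faster,
# mirroring the idea that fluency shifts the time intensity downward.
beta = 4.0                                       # item time intensity (log seconds)
tau = np.array([0.0, 0.3, 0.9])[alpha]           # speed grows with level
T = np.exp(rng.normal(beta - tau[:, None], 0.3, size=(N, J)))

# Summaries by latent level: fluent examinees should be both the most
# accurate and the fastest group.
acc = np.array([Y[alpha == k].mean() for k in range(3)])
speed = np.array([T[alpha == k].mean() for k in range(3)])
print(acc)    # mean accuracy, increasing with attribute level
print(speed)  # mean response time, decreasing with attribute level
```

In the paper's actual joint models, the level-specific accuracy and speed parameters above would instead be estimated from data via the Bayesian (MCMC) calibration approach, rather than fixed.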

Type
Theory and Methods
Copyright
Copyright © 2020 The Psychometric Society
