
A New Item Response Theory Modeling Approach with Applications to Unidimensionality Assessment and Ability Estimation

Published online by Cambridge University Press:  01 January 2025

William F. Stout*
Affiliation:
Department of Statistics, University of Illinois at Urbana-Champaign
*
Requests for reprints should be sent to William F. Stout, Department of Statistics, University of Illinois at Urbana-Champaign, 725 South Wright Street, Champaign, IL 61820.

Abstract

Using an infinite item test framework, it is argued that the usual assumption of local independence be replaced by a weaker assumption, essential independence. A fortiori, the usual assumption of unidimensionality is replaced by a weaker and arguably more appropriate statistically testable assumption of essential unidimensionality. Essential unidimensionality implies the existence of a “unique” unidimensional latent ability. Essential unidimensionality is equivalent to the “consistent” estimation of this latent ability in an ordinal scaling sense using any balanced empirical scaling. A variation of this estimation approach allows consistent estimation of ability on the given latent ability scale.
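For readers without access to the full text, the weakening can be stated concretely. Local independence requires that the conditional covariance Cov(U_i, U_j | θ) vanish for every item pair; essential independence, as formalized in this line of work, asks only that the average absolute conditional covariance over all pairs tend to zero as test length grows. The ordinal consistency claim can then be illustrated by simulation. The sketch below is not the paper's procedure; it assumes a unidimensional Rasch model with arbitrary illustrative sample sizes and seed, and uses proportion-correct as a simple balanced empirical scaling. It shows that the score-based ranking of examinees agrees increasingly with the ranking by true latent ability as the test lengthens.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_responses(theta, b, rng):
    """0/1 responses under a Rasch model: P(U=1 | theta) = logistic(theta - b)."""
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    return (rng.random(p.shape) < p).astype(int)

def rank_corr(x, y):
    """Spearman-type rank correlation computed with NumPy only."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

theta = rng.normal(size=500)          # latent abilities of 500 simulated examinees
results = {}
for n_items in (10, 400):
    b = rng.normal(size=n_items)      # item difficulties (illustrative)
    u = simulate_responses(theta, b, rng)
    score = u.mean(axis=1)            # proportion-correct: a simple balanced scaling
    results[n_items] = rank_corr(theta, score)
    print(n_items, "items: rank correlation with true ability =",
          round(results[n_items], 3))
```

Under these assumptions the rank correlation for the 400-item test should be substantially closer to 1 than for the 10-item test, which is the ordinal sense of consistency at issue in the abstract.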

Type
Original Paper
Copyright
Copyright © 1990 The Psychometric Society


Footnotes

I wish to thank Brian Junker for useful comments and discussions resulting from his careful and thoughtful reading of the manuscript. I wish to thank Charles Davis for useful comments that led to several improvements, in particular in sections 3 and 4. I wish to thank D. R. Divgi for an insight that led to a useful reformulation of essential independence. Lloyd Humphreys' steadfast insistence that test items are inherently “multiply determined” provided the empirical grounding for essential dimensionality. Remarks by Mark Reckase and Ming-mei Wang were also helpful.

This research was supported by Office of Naval Research Grant N00014-87-K-0277 and National Science Foundation Grant NSF-DMS-88-02556.

References

Cliff, N. (1977). A theory of consistency of ordering generalizable to tailored testing. Psychometrika, 42, 375–399.
Cliff, N. (1979). Test theory without true scores? Psychometrika, 44, 373–393.
Cliff, N. (1989). Ordinal consistency and ordinal true scores. Psychometrika, 54, 75–92.
Cliff, N., & Donoghue, J. R. (1990). Ordinal test fidelity estimated from an item sampling model. Unpublished manuscript.
Hambleton, R. K., & Swaminathan, H. (1985). Item response theory: Principles and applications. Boston: Kluwer Nijhoff.
Holland, P., Junker, B., & Thayer, D. (1987). Recovering the ability distribution from test scores. Unpublished manuscript, Educational Testing Service, Princeton, NJ.
Humphreys, L. (1984). A theoretical and empirical study of the psychometric assessment of psychological test dimensionality and bias. Washington, DC: Office of Naval Research.
Junker, B. (1988). Statistical aspects of a new latent trait model. Unpublished doctoral dissertation, University of Illinois at Urbana-Champaign, Department of Statistics.
Lord, F. M. (1980). Applications of item response theory to practical testing problems. Hillsdale, NJ: Lawrence Erlbaum.
Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Reading, MA: Addison-Wesley.
McDonald, R. P. (1981). The dimensionality of tests and items. British Journal of Mathematical and Statistical Psychology, 34, 100–117.
McDonald, R. P., & Mulaik, S. A. (1979). Determinacy of common factors: A nontechnical review. Psychological Bulletin, 86, 297–306.
Mislevy, R. J. (1987). Recent developments in IRT. In Rothkopf, E. Z. (Ed.), Review of research in education (pp. 239–275). Washington, DC: American Educational Research Association.
Mokken, R., & Lewis, C. (1982). A nonparametric approach to dichotomous item responses. Applied Psychological Measurement, 6, 417–430.
Mulaik, S. A., & McDonald, R. P. (1978). The effect of additional variables on factor indeterminacy in models with a single common factor. Psychometrika, 43, 177–192.
Reckase, M. D., Carlson, J. E., Ackerman, T. A., & Spray, J. A. (1986, June). The interpretation of unidimensional IRT parameters when estimated from multidimensional data. Paper presented at the Annual Meeting of the Psychometric Society, Toronto.
Sijtsma, K., & Molenaar, I. (1987). Reliability of test scores in nonparametric item response theory. Psychometrika, 52, 79–98.
Stout, W. (1987). A nonparametric approach for assessing latent trait dimensionality. Psychometrika, 52, 589–618.
Tucker, L. R., Koopman, R. F., & Linn, R. L. (1969). Evaluation of factor analytic research procedures by means of simulated correlation matrices. Psychometrika, 34, 421–459.
Wang, M. (1988). Measurement bias in the application of a unidimensional model to multidimensional item-response data. Unpublished manuscript, Educational Testing Service, Princeton, NJ.