
Using Item-Specific Instructional Information in Achievement Modeling

Published online by Cambridge University Press:  01 January 2025

Bengt O. Muthén*
Affiliation: Graduate School of Education, University of California, Los Angeles
*Requests for reprints should be sent to Bengt O. Muthén, Graduate School of Education, University of California, Los Angeles, 405 Hilgard Avenue, Los Angeles, CA 90024-1521.

Abstract

The problem of detecting instructional sensitivity ("item bias") in test items is considered. An illustration shows that for tests with many biased items, traditional item bias detection schemes give a very poor assessment of bias. A new method is proposed instead. This method extends item response theory (IRT) by including item-specific auxiliary measurement information related to opportunity-to-learn. Item-specific variation in measurement relations across students with varying opportunity-to-learn is allowed for.
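The abstract only sketches the model, but the idea can be illustrated with a short simulation. The sketch below (Python; all parameter values and variable names are hypothetical, and this is not the paper's LISCOMP formulation) generates responses from a normal-ogive IRT model in which each item's measurement relation shifts with an item-specific opportunity-to-learn (OTL) indicator, and then shows why a simple item-by-item group comparison conflates that shift with genuine achievement differences when many items are affected.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_students, n_items = 1000, 6

# Item-specific opportunity-to-learn indicators: 1 = the student was
# taught the material relevant to that item (hypothetical data).
otl = rng.binomial(1, 0.5, size=(n_students, n_items))

# Latent achievement; students exposed to more of the curriculum tend
# to have somewhat higher achievement overall.
theta = rng.normal(size=n_students) + 0.5 * otl.mean(axis=1)

a = rng.uniform(0.8, 1.5, size=n_items)   # item discriminations
b = rng.uniform(-1.0, 1.0, size=n_items)  # item intercepts
d = rng.uniform(0.2, 0.8, size=n_items)   # item-specific OTL effects on measurement

# Normal-ogive response probabilities: the measurement relation for item j
# shifts by d[j] for students who had the opportunity to learn that item.
p = norm.cdf(a * theta[:, None] + b + d * otl)
y = rng.binomial(1, p)

# A traditional-style item-by-item group comparison: the observed gap mixes
# the item-specific measurement shift d[j] with true achievement differences
# between OTL groups, which is why such comparisons can mislead when many
# items are instructionally sensitive.
for j in range(n_items):
    gap = y[otl[:, j] == 1, j].mean() - y[otl[:, j] == 0, j].mean()
    print(f"item {j}: proportion-correct gap between OTL groups = {gap:.3f}")
```

In this sketch, the quantity of interest is the item-specific shift d[j]; estimating it jointly with the latent achievement variable, rather than comparing observed proportions, is the kind of extension of IRT the abstract describes.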

Type: Original Paper
Copyright: © 1989 The Psychometric Society

Footnotes

This paper was presented at the 1987 AERA meeting in Washington, DC. This research was supported by grant OERI-G-86-003 from the Office of Educational Research and Improvement, Department of Education. The author thanks Michael Hollis and Chih-fen Kao for valuable research assistance and an anonymous reviewer for helpful comments.
