
Specifying Optimum Examinees for Item Parameter Estimation in Item Response Theory

Published online by Cambridge University Press:  01 January 2025

Martha L. Stocking
Affiliation: Educational Testing Service
Requests for reprints should be sent to Martha L. Stocking, Mail Stop 03-T, Educational Testing Service, Princeton, NJ 08541.

Abstract

Information functions are used to find the optimum ability levels and maximum contributions to information for estimating item parameters in three commonly used logistic item response models. For the three- and two-parameter logistic models, examinees who contribute maximally to the estimation of item difficulty contribute little to the estimation of item discrimination. This suggests that in applications that depend heavily upon the veracity of individual item parameter estimates (e.g., adaptive testing or test construction), better item calibration results may be obtained (for fixed sample sizes) from examinee calibration samples in which ability is widely dispersed.
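The trade-off described above can be sketched numerically for the two-parameter logistic model. In the 2PL, an examinee at ability θ contributes Fisher information a²P(1−P) about the difficulty b, but (θ−b)²P(1−P) about the discrimination a; an examinee located exactly at θ = b therefore maximizes the former while contributing nothing to the latter. The snippet below is an illustrative sketch (function names are my own, not from the paper):

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info_b(theta, a, b):
    """One examinee's Fisher information about difficulty b: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def info_a(theta, a, b):
    """One examinee's Fisher information about discrimination a: (theta - b)^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return (theta - b) ** 2 * p * (1.0 - p)

# An examinee at theta = b maximizes information about b (a^2 / 4)
# yet contributes zero information about a.
a, b = 1.2, 0.0
print(info_b(0.0, a, b), info_a(0.0, a, b))
# An examinee far from b contributes more to a, less to b.
print(info_b(2.0, a, b), info_a(2.0, a, b))
```

Running this shows why a calibration sample clustered near the item difficulty estimates b well but a poorly, while a widely dispersed sample supports both parameters.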

Type: Original Paper
Copyright: © 1990 The Psychometric Society


Footnotes

This work was supported by Contract No. N00014-83-C-0457, project designation NR 150-520, from Cognitive Science Program, Cognitive and Neural Sciences Division, Office of Naval Research and Educational Testing Service through the Program Research Planning Council. Reproduction in whole or in part is permitted for any purpose of the United States Government. The author wishes to acknowledge the invaluable assistance of Maxine B. Kingston in carrying out this study, and to thank Charles Lewis for his many insightful comments on earlier drafts of this paper.
