
A Model-Based Standardization Approach that Separates True Bias/DIF from Group Ability Differences and Detects Test Bias/DTF as well as Item Bias/DIF


Robin Shealy
Department of Statistics, University of Illinois at Urbana-Champaign

William Stout
Department of Statistics, University of Illinois at Urbana-Champaign
Requests for reprints should be sent to William Stout, Department of Statistics, 101 Illini Hall, 725 South Wright Street, Champaign, IL 61820.

Abstract

A model-based modification of the standardization index (SIBTEST), grounded in a multidimensional IRT model of bias, is presented; it detects and estimates DIF or item bias simultaneously for several items. A distinction between DIF and bias is proposed. SIBTEST detects bias/DIF without the usual Type I error inflation caused by group differences in target ability. In simulations, SIBTEST performs comparably to Mantel-Haenszel in the one-item case. SIBTEST also investigates bias/DIF for several items at the test score level (multiple-item DIF, called differential test functioning, or DTF), thereby allowing the study of test bias/DTF, in particular the amplification or cancellation of bias/DIF across items and its cognitive bases.
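As a rough illustration of the standardization logic underlying SIBTEST, the sketch below computes a weighted difference of studied-subtest means between reference and focal examinees matched on a valid-subtest score. This is a minimal sketch, not the authors' procedure: the names (sibtest_beta, match_ref, and so on) are invented for illustration, and the regression correction of the matched means, the paper's key device for removing Type I error inflation due to group target ability differences, is omitted.

```python
import numpy as np

def sibtest_beta(match_ref, studied_ref, match_foc, studied_foc):
    """Weighted mean-difference statistic in the spirit of SIBTEST (sketch).

    match_*   : 1-D integer arrays of scores on the valid matching subtest.
    studied_* : 1-D arrays of scores on the studied (suspect) item or subtest.
    Returns (beta_hat, se): the estimated bias/DIF index and its standard error.
    """
    match_ref, studied_ref = np.asarray(match_ref), np.asarray(studied_ref)
    match_foc, studied_foc = np.asarray(match_foc), np.asarray(studied_foc)
    # Only score levels attained by both groups contribute to the statistic.
    levels = np.intersect1d(np.unique(match_ref), np.unique(match_foc))
    beta_hat = var_hat = 0.0
    for k in levels:
        r = studied_ref[match_ref == k]   # reference examinees matched at score k
        f = studied_foc[match_foc == k]   # focal examinees matched at score k
        if len(r) < 2 or len(f) < 2:      # skip levels too sparse for a variance
            continue
        p_k = len(f) / len(match_foc)     # weight: focal-group proportion at k
        beta_hat += p_k * (r.mean() - f.mean())
        var_hat += p_k**2 * (r.var(ddof=1) / len(r) + f.var(ddof=1) / len(f))
    return beta_hat, np.sqrt(var_hat)
```

Under the no-bias null hypothesis, B = beta_hat / se is approximately standard normal, so a large |B| flags DIF when one item is studied, or DTF when several studied items are pooled at the score level.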

Type: Original Paper

Copyright © 1993 The Psychometric Society


Footnotes

This research was partially supported by Office of Naval Research Cognitive and Neural Sciences Grant N0014-90-J-1940, 4421-548 and National Science Foundation Mathematics Grant NSF-DMS-91-01436. The research reported here is collaborative in every respect and the order of authorship is alphabetical. The assistance of Hsin-hung Li and Louis Roussos in conducting the simulation studies was of great help. Discussions with Terry Ackerman, Paul Holland, and Louis Roussos were very helpful.
