
The Relationship between Factorial Composition of Test Items and Measures of Test Reliability

Published online by Cambridge University Press:  01 January 2025

John W. Cotton, Donald T. Campbell, and R. Daniel Malone
Affiliation: Northwestern University

Abstract

For continuous distributions associated with dichotomous item scores, the proportion of common-factor variance in the test, H², may be expressed as a function of intercorrelations among items. H² is somewhat larger than the coefficient α except when the items have only one common factor and its loadings are restricted in value. The dichotomous item scores themselves are shown not to have a factor structure, precluding direct interpretation of the Kuder-Richardson coefficient, r_K-R, in terms of factorial properties. The value of r_K-R is equal to that of a coefficient of equivalence, H²_Φ, when the mean item variance associated with common factors equals the mean interitem covariance. An empirical study with synthetic test data from populations of varying factorial structure showed that the four parameters mentioned may be adequately estimated from dichotomous data.
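The Kuder-Richardson coefficient discussed in the abstract (KR-20, identical to coefficient α for dichotomous items) can be illustrated with a small simulation. The sketch below is not from the paper itself: it generates synthetic dichotomous scores driven by a single common factor (the loading value and sample sizes are arbitrary assumptions) and computes KR-20 from the item and total-score variances.

```python
import numpy as np

# Hypothetical illustration (not the authors' procedure): KR-20 computed
# from synthetic dichotomous item scores with one common factor.
rng = np.random.default_rng(0)

# Assumed setup: 200 examinees, 10 items, equal loadings of 0.6.
n_persons, n_items = 200, 10
theta = rng.normal(size=(n_persons, 1))            # common-factor scores
loadings = np.full((1, n_items), 0.6)              # equal loadings (assumed)
latent = theta @ loadings + rng.normal(size=(n_persons, n_items))
scores = (latent > 0).astype(float)                # dichotomize at zero

k = n_items
item_var = scores.var(axis=0, ddof=1)              # per-item variances (p*q)
total_var = scores.sum(axis=1).var(ddof=1)         # variance of total score

# KR-20 / coefficient alpha: (k/(k-1)) * (1 - sum of item var / total var)
kr20 = (k / (k - 1)) * (1.0 - item_var.sum() / total_var)
print(round(kr20, 3))
```

With positively intercorrelated items, the total-score variance exceeds the sum of item variances, so KR-20 falls between 0 and 1 here.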

Type: Original Paper
Copyright © 1957 The Psychometric Society


Footnotes

* This study was supported in part by an Air Force project (Contract Number AF 18(600)-170), monitored by the Crew Research Laboratory, Air Force Personnel and Training Research Center, Randolph Air Force Base, Randolph Field, Texas. Permission is granted for reproduction, translation, publication, use and disposal in whole and in part by or for the United States Government. Further support was given by the Northwestern University Graduate School. The computational assistance of Mr. Norman Miller is acknowledged. Professor Meyer Dwass provided mathematical advice both directly and indirectly relevant to the paper.

References

Chesire, L., Saffir, M., and Thurstone, L. L. Computing diagrams for the tetrachoric correlation coefficient. Chicago: Univ. of Chicago Bookstore, 1933.
Cronbach, L. J. Test "reliability": its meaning and determination. Psychometrika, 1947, 12, 1–16.
Cronbach, L. J. Coefficient alpha and the internal structure of tests. Psychometrika, 1951, 16, 297–334.
Dixon, W. J. and Massey, F. J., Jr. Introduction to statistical analysis. New York: McGraw-Hill, 1951.
Ferguson, G. A. The factorial interpretation of test difficulty. Psychometrika, 1941, 6, 323–329.
Jackson, R. W. B. and Ferguson, G. A. Studies on the reliability of tests. Toronto: Department of Educational Research, Bulletin No. 12, 1941.
Kuder, G. F. and Richardson, M. W. The theory of the estimation of test reliability. Psychometrika, 1937, 2, 151–166.
Thurstone, L. L. Multiple-factor analysis. Chicago: Univ. of Chicago Press, 1947.
Wherry, R. J. and Gaylord, R. H. The concept of test and item reliability in relation to factor-pattern. Psychometrika, 1943, 8, 247–269.
Wilks, S. S. Mathematical statistics. Princeton: Princeton Univ. Press, 1943.