
Signal/Noise Ratios for Domain-Referenced Tests

Published online by Cambridge University Press:  01 January 2025

Robert L. Brennan*
Affiliation:
The American College Testing Program
Michael T. Kane
Affiliation:
National League for Nursing
* Requests for reprints should be sent to Dr. Robert L. Brennan, Research and Development Division, The American College Testing Program, P. O. Box 168, Iowa City, Iowa 52240.

Abstract

Using the assumption of randomly parallel tests and concepts from generalizability theory, three signal/noise ratios for domain-referenced tests are developed, discussed, and compared. The three ratios have the same noise but different signals depending upon the kind of decision to be made as a result of measurement. It is also shown that these ratios incorporate a definition of noise or error which is different from the classical definition of noise typically used to characterize norm-referenced tests.
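The computation the abstract alludes to can be illustrated with standard generalizability-theory formulas for a persons × items design. The sketch below is an assumption-laden illustration, not the paper's own notation: it estimates variance components by the usual ANOVA method, takes the noise to be the absolute error variance (item plus interaction components divided by test length), and takes the signal for a cut-score decision to be the person variance plus the squared distance of the group mean from the cut score.

```python
import numpy as np

def gstudy_components(X):
    """ANOVA estimates of variance components for a fully crossed
    persons x items design with one observation per cell."""
    n_p, n_i = X.shape
    grand = X.mean()
    p_means = X.mean(axis=1)
    i_means = X.mean(axis=0)
    ss_p = n_i * ((p_means - grand) ** 2).sum()
    ss_i = n_p * ((i_means - grand) ** 2).sum()
    ss_tot = ((X - grand) ** 2).sum()
    ss_res = ss_tot - ss_p - ss_i
    ms_p = ss_p / (n_p - 1)
    ms_i = ss_i / (n_i - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))
    var_pi = ms_res                              # person x item (+ error)
    var_p = max((ms_p - ms_res) / n_i, 0.0)      # persons
    var_i = max((ms_i - ms_res) / n_p, 0.0)      # items
    return var_p, var_i, var_pi

def sn_ratio(X, cut):
    """Signal/noise ratio for a cut-score decision: noise is the
    absolute error variance; signal adds the squared offset of the
    mean score from the cut score to the person variance."""
    var_p, var_i, var_pi = gstudy_components(X)
    n_i = X.shape[1]
    noise = (var_i + var_pi) / n_i
    signal = (X.mean() - cut) ** 2 + var_p
    return signal / noise
```

Note that `noise` includes the item variance component, which the classical (norm-referenced) error definition omits; that difference is the point of the abstract's final sentence.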

Type
Original Paper
Copyright
Copyright © 1977 The Psychometric Society


Footnotes

Part of the work leading to this paper was accomplished while the authors were assistant professors at the State University of New York at Stony Brook.
