While estimation bias is a primary concern in psychological and educational measurement, the standard error is of equal importance in linking key aspects of the assessment structure, especially when the assessment goal concerns the classification of individuals into categories (e.g., mastery/non-mastery). In this paper, we show analytically how the standard error of ability estimates affects expected classification accuracy and consistency when the decision is binary. When the standard error decreases, the conditional classification accuracy and consistency increase. Given an examinee population and a cut score, a smaller standard error over the entire latent trait continuum guarantees higher overall expected classification accuracy and consistency. We also show the interrelationship among the standard error, the expected classification consistency, and reliability. Utilizing the relationship between the standard error and expected classification accuracy and consistency, we derive upper bounds on the overall expected classification accuracy and consistency of a fixed-length computerized adaptive test. Lower bounds on the expected classification accuracy and consistency are also derived under a number of stopping rules for variable-length computerized adaptive testing. Implications of these analytical results for operational tests are discussed.
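As an illustrative sketch (not the paper's own derivation), suppose the ability estimate $\hat{\theta}$ is approximately normally distributed around the true ability $\theta$ with standard error $SE(\theta)$, and let $\theta_c$ denote the cut score; the symbols $p(\theta)$, $CA(\theta)$, and $CC(\theta)$ below are introduced here only for illustration. Under this normal approximation, the probability of a mastery classification, the conditional classification accuracy, and the conditional classification consistency (across two independent administrations) can be written as

\[
p(\theta) = \Phi\!\left(\frac{\theta - \theta_c}{SE(\theta)}\right), \qquad
CA(\theta) = \Phi\!\left(\frac{\lvert \theta - \theta_c \rvert}{SE(\theta)}\right), \qquad
CC(\theta) = p(\theta)^2 + \bigl(1 - p(\theta)\bigr)^2 .
\]

For fixed $\theta \neq \theta_c$, a smaller $SE(\theta)$ increases $CA(\theta)$ and moves $p(\theta)$ away from $1/2$, which increases $CC(\theta)$; integrating these conditional quantities over the examinee ability distribution yields the overall expected classification accuracy and consistency referred to in the abstract.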