Over the years, many in the field of organizational psychology have claimed that meta-analytic tests for moderators provide evidence for validity generalization (Schmidt & Hunter, 1977), a term first used in the middle of the last century (Mosier, 1950). In response, Tett, Hundley, and Christiansen (2017) advise caution about our inclination to generalize findings across workplaces and domains, and they urge precision in attaching meaning to the statistic we are generalizing. Their focal article is insightful and offers important recommendations for researchers regarding certain statistical indicators of unexplained variability, such as SDρ. In this commentary, we would like to make a different point about SDρ, namely that it, and other statistics based on residual variance, will be deflated when moderators vary little across the studies being aggregated. It is this lack of between-study variance, as much as anything else, that leads to misguided conclusions about validity generalization.
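To make the deflation point concrete, the following is a minimal simulation sketch, not an analysis from the focal article: a bare-bones Hunter–Schmidt-style estimate of SDρ under two hypothetical conditions, one in which a study-level moderator varies freely across studies and one in which it is nearly constant. All parameter values and the function name `simulate_sd_rho` are ours, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_sd_rho(moderator_sd, k=200, n_per_study=150):
    """Bare-bones estimate of SD_rho for k studies of equal size.

    True validity depends on a study-level moderator. When the moderator
    barely varies between studies (small moderator_sd), the residual
    variance, and hence SD_rho, shrinks toward zero even though the
    moderator still matters. Hypothetical values for illustration only.
    """
    moderator = rng.normal(0.0, moderator_sd, size=k)
    rho = np.clip(0.30 + moderator, -0.95, 0.95)       # true validities
    # Observed correlations = true validity + sampling error
    se = (1 - rho**2) / np.sqrt(n_per_study - 1)
    r_obs = rho + rng.normal(0.0, se)

    r_bar = np.mean(r_obs)
    var_obs = np.var(r_obs, ddof=1)
    var_e = (1 - r_bar**2) ** 2 / (n_per_study - 1)    # expected sampling-error variance
    var_res = max(var_obs - var_e, 0.0)                # residual (between-study) variance
    return np.sqrt(var_res)                            # SD_rho estimate

print("Moderator varies across studies: SD_rho ~", round(simulate_sd_rho(0.15), 3))
print("Moderator nearly constant:       SD_rho ~", round(simulate_sd_rho(0.02), 3))
```

Under these assumed settings, the second call returns an SDρ near zero, which would conventionally be read as evidence that validity generalizes, even though the moderator's effect on validity is unchanged; only its between-study variance has been restricted.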