
Ideal solutions don’t necessarily inform reality

Published online by Cambridge University Press:  31 August 2023

P. D. Harms*
Affiliation:
Management Department, University of Alabama, Tuscaloosa, AL, USA
Jeffrey L. Foster
Affiliation:
Psychology Department, Missouri State University, Springfield, MO, USA
Bradley J. Brummel
Affiliation:
Department of Psychology, The University of Tulsa, Tulsa, OK, USA
Corresponding author: P. D. Harms; Email: pdharms@cba.ua.edu

Type: Commentaries
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of the Society for Industrial and Organizational Psychology

We applaud the efforts of Sackett et al. (2023) to update research that is often uncritically cited and discussed in the field of I–O psychology, as well as their sensible recommendations and the overall thoughtfulness of their paper. However, like much published research in this area, we believe it falls short in its aim of informing selection practices in most organizations. Sackett et al.'s (2022) update to Schmidt and Hunter (1998) falls into the same trap as prior meta-analyses: the statistically corrected estimates they aggregate from studies designed to validate selection measures, and the recommendations that follow from them, seem to assume an ideal (perhaps even an imaginary) world. Specifically, as with nearly all academic articles and textbooks on selection, their results are framed in a context where organizations have copious financial resources, extended periods of time, access to technology, and readily available applicant pools with numerous applicants who vary substantially in job-relevant characteristics (e.g., personality, abilities, interests). In reality, HR practitioners more frequently face limited resources, limited time to make hires, limited available technology (and experience using it), and severely limited applicant pools. Moreover, these limitations are likely to shift over time with changes in the ambient economic climate, the strategic priorities of organizational leadership, the success and reputation of the organization, and technological progress.

So, when confronted with optimized models that make recommendations about best practices in selection based on corrected estimates, we find ourselves asking not only how an HR practitioner is supposed to make use of this information but whether they should consider it at all. Although such articles provide useful hypothetical benchmarks when academics seek to inform practitioner choices, we should consider not only what selection would look like in ideal circumstances but also what is possible in a reality-constrained world. It is our position that I–O psychology would be better served if we studied how selection works in practice and tried to meet practitioners halfway by making our recommendations reflect the limitations HR practitioners face.¹

We freely admit that we don't have ready-made solutions to these real-world limitations and problems, but we hope that this comment can serve as a foundation for a more practice-oriented stream of research in I–O psychology. To that end, the following is an incomplete list of issues and considerations that I–O academics may want to address in future research concerning best practices in selection.

  1. Consider budget constraints. In many organizations, HR functions are treated as cost centers, and HR staff are provided with limited budgets. Rather than always trying to identify an ideal set of predictors for optimizing prediction, it would also make sense to consider which gains in predictive utility can be achieved most efficiently in terms of cost (a minimal utility sketch follows this list). These costs should include more than just the per-applicant rate for a test; they should also include the personnel and infrastructure needed, internally and externally, to implement the system.

  2. Consider time constraints. As with budget constraints, organizations usually do not have unlimited time to devote to running selection systems and often face demands to fill large numbers of positions very quickly. They must also consider the length of the selection cycle itself (i.e., how long it takes to fill a position once it has been posted). It would be useful, therefore, to examine how to maximize prediction in a limited amount of time.

  3. Consider the screening process. Most I–O psychologists are aware that a critical method for reducing the time and cost of selection is to use a multihurdle selection process. Insightful and economically minded HR practitioners will be inclined to use inexpensive, scalable predictors first, even if those are not the most valid or useful initial screen according to optimized models (see the cost sketch following this list). Or they may use information from one part of the selection process to inform subsequent steps, such as referring to a person's résumé or assessment results during a later interview. Research should examine the sequencing and/or combination of different selection procedures to help organizations maximize the validity and utility of the process they use. Furthermore, much as Sackett et al. do when adding Cohen's d for Black and White differences to their Table 1, adding cost and time estimates alongside validity estimates would not only highlight the importance of these factors but also provide valuable information for designing and evaluating selection systems.

  4. Consider the nature of job performance and the job itself. It is widely acknowledged that job performance is multifaceted (e.g., core task performance, citizenship behaviors, safety behaviors, workplace deviance). These performance outcomes can and do have different sets of predictors, so trying to identify a single ideal set is likely to mislead. Indeed, when predictors are theoretically aligned with the performance criteria they are meant to forecast, the estimated predictive power of many variables increases substantially (see Hogan & Holland, 2003). It may sometimes be more useful to focus on predicting the aspects of performance that represent critical talent gaps or needs at a particular point in time. Hiring success also includes longevity and promotion within the company, yet these outcomes are not captured in most performance studies. Research should focus more on predicting specific aspects of job performance and other outcomes important to organizations, such as tenure and promotion, and not just overall performance ratings provided by managers.

  5. Consider different goals, such as avoiding catastrophic hires. For many organizations, sufficiency, not optimization, is the goal in hiring. Therefore, it may at times be more appropriate to focus on avoiding problematic hires rather than assuming the goal is always to identify the very best candidates from a seemingly unlimited applicant pool. The growing literature (e.g., Boddy et al., 2021; Kusy & Holloway, 2009) on the cost of toxic employees and leaders, who are often skilled at gaming performance systems at the expense of their coworkers' performance and well-being, is important to consider here and warrants more attention in the area of selection.

  6. Consider the applicant pool. Although we usually teach selection methods with the idea that we will have our choice of applicants, reality is more constrained. HR practitioners often struggle to recruit a deep and diverse applicant pool. Moreover, applicants self-select into jobs and can remove themselves from the selection process for any number of reasons. In practice, we are unlikely to have the diversity and range of applicants that idealized models assume. Research focusing on the impact of applicant pool characteristics should lead to more practical considerations and recommendations for maximizing the efficiency and utility of subsequent selection procedures. As Sackett et al. note, not only is range restriction likely to vary by sample, but range restriction resulting from applicant pool characteristics may also vary over time, which could affect the validity and utility of selection procedures (see the correction formula following this list).

  7. Consider other aspects of the hiring and onboarding process. Although the estimated effectiveness of our selection instruments is inherently tied to the capacity to recruit, research rarely examines the two together. Furthermore, although the goal of most selection systems is to identify the applicants most likely to be high performers, performance is clearly affected by events and circumstances that occur after a person has been hired. Many jobs can be performed at an acceptable level after relatively short training, and when this is the case, optimizing the selection system may not be the most effective use of limited HR resources. An organization's ability to attract qualified and interested candidates, along with its ability to train and develop them as new employees, will inevitably affect the validity and utility of the procedures used to screen those candidates. Research and subsequent applied recommendations should focus more on the hiring process as a whole rather than just the selection instruments used as part of that process.

  8. Consider predictors that are widely used in practice but seldom researched. Although Sackett et al. note that data are not yet available to thoroughly evaluate many novel predictors, tools such as criminal background checks and drug tests are commonly used for many jobs yet are rarely included in reviews or meta-analyses alongside other selection procedures. Another important but often overlooked factor is educational credentials, which are not only often used as an initial screen but may also serve as proxies for other predictors such as cognitive ability and personality traits. Studies that fail to account for the range restriction this imposes are likely to produce reduced estimates of predictive power (see Berry et al., 2006). It seems likely to us that the reduced validity estimates found for some predictors (e.g., GMA, conscientiousness) in the updated review could be at least partially explained by an increasingly credentialized economy.

  9. Consider how the measures themselves affect estimates. We know from prior research that measures bearing the same name can assess substantially distinct constructs (Pace & Brannick, 2010) and structures (Park et al., 2020). This becomes a problem when meta-analytic estimates assume equivalence, particularly when those estimates are disproportionately based on instruments that are unsuited to actual selection settings. It also likely contributes to the variability in corrected validities pointed out by Sackett et al. Research should distinguish between specific measures or techniques whenever possible.
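
To make the cost-efficiency point in item 1 concrete, the classic Brogden-Cronbach-Gleser utility model (a standard result in the selection literature, not a formula proposed by Sackett et al.) can be written with an explicit testing-cost term:

$$\Delta U = N_s \, r_{xy} \, SD_y \, \bar{z}_x - N \cdot C$$

where $N_s$ is the number of applicants hired, $r_{xy}$ the operational validity of the predictor composite, $SD_y$ the dollar value of one standard deviation of job performance, $\bar{z}_x$ the mean standardized predictor score of those hired, $N$ the number of applicants assessed, and $C$ the cost per applicant. Dividing $\Delta U$ by the total outlay $N \cdot C$ yields a return-per-dollar figure, which is the kind of quantity a budget-constrained HR practitioner can actually compare across candidate selection systems.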
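
As a minimal sketch of the sequencing arithmetic in item 3, the following compares the expected assessment cost per hire when the same two hurdles are applied in different orders. All figures (costs, pass rates, pool size) are invented for illustration and are not drawn from Sackett et al.:

```python
# Hypothetical illustration: expected assessment cost per hire for the
# same two hurdles applied in different orders. All numbers are invented.

def cost_per_hire(n_applicants, hurdles, n_hires):
    """hurdles: ordered list of (cost_per_applicant, pass_rate) tuples."""
    remaining = n_applicants
    total_cost = 0.0
    for cost, pass_rate in hurdles:
        total_cost += remaining * cost  # everyone still in the pool is assessed
        remaining *= pass_rate          # only a fraction advances to the next hurdle
    return total_cost / n_hires

screen = (5, 0.50)        # cheap, scalable screen: $5 per applicant, 50% pass
assessment = (200, 0.20)  # richer assessment: $200 per applicant, 20% pass

print(cost_per_hire(1000, [screen, assessment], n_hires=100))  # 1050.0
print(cost_per_hire(1000, [assessment, screen], n_hires=100))  # 2010.0
```

With identical instruments and pass rates, placing the inexpensive screen first roughly halves the assessment cost per hire, which is precisely the tradeoff that cost and time columns alongside validity estimates would make visible.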
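
Item 6's point about shifting applicant pools can also be seen directly in the standard correction for direct range restriction (Thorndike's Case II), the same family of corrections at issue in Sackett et al.'s reanalysis:

$$\rho = \frac{r\,U}{\sqrt{1 - r^2 + r^2 U^2}}, \qquad U = \frac{S_{\text{applicant}}}{s_{\text{incumbent}}}$$

where $r$ is the validity observed in the restricted (hired) sample and $U$ is the ratio of the applicant-pool predictor standard deviation to the incumbent standard deviation. Because $U$ depends entirely on who applies and who is screened out, the same observed $r$ implies different corrected validities as the pool tightens or loosens, so a correction calibrated to one labor market or time period may not transfer to another.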

In short, Sackett et al. (2022) provide a much-needed correction and update on an important issue in the field of I–O psychology. However, we believe there is still a great deal that research in this area can and should attend to regarding how selection processes are actually used in organizations. Without more attention to real-world applications, most of our current research falls short of being useful for informing practice. Rather than dictating what organizations should do in a perfect world free of contextual constraints, we should aim to help organizations do the best they can with what they have available.

Academics have a long history of lamenting that practitioners aren't listening to their advice (e.g., Rogelberg et al., 2022; Rynes et al., 2007), but perhaps instead of pointing fingers we need to point thumbs. Future research may prove more useful if it considers the manifold challenges HR practitioners face in real-world selection settings and works to provide a more accessible, realistic, and dynamic set of recommendations with those challenges in mind. Similarly, organizational researchers operating exclusively in the academic domain tend to cite validity estimates like those in Sackett et al. (2022) and Schmidt and Hunter (1998) indiscriminately and with unquestioning reverence. Although we applaud the efforts of the Sackett et al. team to highlight some of the reasons why such claims may oversimplify circumstances in actual selection settings, we hope that the concerns raised both by us and by the Sackett et al. author team will give reason for caution and reflection when using this information.

Footnotes

1 To be fair, Sackett et al. do acknowledge several of the issues we raise here, including resource and time constraints, applicant reactions to testing, and the multifaceted nature of job performance. Part of our goal in this comment is to amplify and expand on the caveats they raise and to highlight why they are so critical for decision making in actual selection settings.

References

Berry, C., Gruys, M., & Sackett, P. (2006). Educational attainment as a proxy for cognitive ability in selection: Effects on levels of cognitive ability and adverse impact. Journal of Applied Psychology, 91, 696–705.
Boddy, C., Boulter, L., & Fishwick, S. (2021). How so many toxic employees ascend to leadership. In A. Örtenblad (Ed.), Debating bad leadership (pp. 69–85). Springer.
Hogan, J., & Holland, B. (2003). Using theory to evaluate personality and job-performance relations: A socioanalytic perspective. Journal of Applied Psychology, 88, 100–112.
Kusy, M., & Holloway, E. (2009). Toxic workplace!: Managing toxic personalities and their systems of power. John Wiley & Sons.
Pace, V., & Brannick, M. (2010). How similar are personality scales of the "same" construct? A meta-analytic investigation. Personality and Individual Differences, 49, 669–676.
Park, H., Wiernik, B., Oh, I.-S., Gonzalez-Mulé, E., Ones, D., & Lee, Y. (2020). Meta-analytic five-factor model personality intercorrelations: Eeny, meeny, miney, moe, how, which, why, and where to go. Journal of Applied Psychology, 105, 1490–1529.
Rogelberg, S., King, E., & Alonso, A. (2022). How we can bring I-O psychology science and evidence-based practices to the public. Industrial and Organizational Psychology: Perspectives on Science and Practice, 15, 259–272.
Rynes, S., Giluk, T., & Brown, K. (2007). The very separate worlds of academic and practitioner periodicals in human resource management: Implications for evidence-based management. Academy of Management Journal, 50, 987–1008.
Sackett, P., Zhang, C., Berry, C., & Lievens, F. (2022). Revisiting meta-analytic estimates of validity in personnel selection: Addressing systematic overcorrection for restriction of range. Journal of Applied Psychology, 107, 2040–2068.
Sackett, P., Zhang, C., Berry, C., & Lievens, F. (2023). Revisiting the design of selection systems in light of new findings regarding the validity of widely used predictors. Industrial and Organizational Psychology: Perspectives on Science and Practice, 16(3), 283–300.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274.