
Principles for the Validation and Use of Personnel Selection Procedures

Published online by Cambridge University Press: 28 December 2018

Extract

The Society for Industrial and Organizational Psychology (SIOP) is pleased to offer the fifth edition of the Principles for the Validation and Use of Personnel Selection Procedures, which was approved by the APA Council of Representatives in August 2018 as an authoritative guidelines document for employee selection testing and an official statement of the APA. Over a three-year period, the Principles Revision Committee updated this document from the fourth edition to be consistent with the 2014 Standards for Educational and Psychological Testing, invited commentary from SIOP and APA that informed subsequent revisions, and solicited a thorough legal review.

Type: Research Article
Copyright: © Society for Industrial and Organizational Psychology 2018

References

Ackerman, P. L., & Humphreys, L. G. (1990). Individual differences theory in industrial and organizational psychology. In Dunnette, M. D. & Hough, L. M. (Eds.), Handbook of industrial and organizational psychology (Vol. 1, pp. 233–282). Palo Alto, CA: Consulting Psychologists Press.
Aguinis, H., Culpepper, S. A., & Pierce, C. A. (2010). Revival of test bias research in preemployment testing. Journal of Applied Psychology, 95, 648–680. doi:10.1037/a0018714
Aguinis, H., Culpepper, S. A., & Pierce, C. A. (2016). Differential prediction generalization in college admissions testing. Journal of Educational Psychology, 108, 1045–1059.
Aguinis, H., Gottfredson, R. K., & Joo, H. (2013). Best-practice recommendations for defining, identifying, and handling outliers. Organizational Research Methods, 16, 270–301.
Aguinis, H., Petersen, S. A., & Pierce, C. A. (1999). Appraisal of the homogeneity of error variance assumption and alternatives to multiple regression for estimating moderating effects of categorical variables. Organizational Research Methods, 2, 315–339.
Aguinis, H., & Pierce, C. A. (1998). Testing moderator variable hypotheses meta-analytically. Journal of Management, 24, 577–592.
Aguinis, H., Sturman, M. C., & Pierce, C. A. (2008). Comparison of three meta-analytic procedures for estimating moderating effects of categorical variables. Organizational Research Methods, 11, 9–34.
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
American Psychological Association. (2017a). Ethical principles of psychologists and code of conduct (2002, amended June 1, 2010 and January 1, 2017). Retrieved from http://www.apa.org/ethics/code/index.aspx
American Psychological Association. (2017b). Multicultural guidelines: An ecological approach to context, identity, and intersectionality. Retrieved from http://www.apa.org/about/policy/multicultural-guidelines.pdf
American Psychological Association. (2018). Professional practice guidelines for occupationally mandated psychological evaluations. American Psychologist, 73, 186–197.
Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73, 3–25.
Arneson, J. J., Sackett, P. R., & Beatty, A. S. (2011). Ability-performance relationships in education and employment settings: Critical tests of the more-is-better and the good-enough hypotheses. Psychological Science, 22, 1336–1342.
Arthur, W., Jr., Day, E. A., McNelly, T. L., & Edens, P. S. (2003). A meta-analysis of the criterion-related validity of assessment center dimensions. Personnel Psychology, 56, 125–153.
Arthur, W., Jr., & Villado, A. J. (2008). The importance of distinguishing between constructs and methods when comparing predictors in personnel selection research and practice. Journal of Applied Psychology, 93, 435–442.
Aytug, Z. G., Rothstein, H. R., Zhou, W., & Kern, M. C. (2012). Revealed or concealed? Transparency of procedures, decisions, and judgment calls in meta-analyses. Organizational Research Methods, 15, 103–133.
Barrett, G. V., Phillips, J. S., & Alexander, R. A. (1981). Concurrent and predictive validity designs: A critical reanalysis. Journal of Applied Psychology, 66, 1–6.
Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1–26.
Beatty, A. S., Barratt, C. L., Berry, C. M., & Sackett, P. R. (2014). Testing the generalizability of indirect range restriction corrections. Journal of Applied Psychology, 99, 587–598.
Bemis, S. E. (1968). Occupational validity of the General Aptitude Test Battery. Journal of Applied Psychology, 52, 240–249.
Berry, C. M., Clark, M. A., & McClure, T. K. (2011). Racial/ethnic differences in the criterion-related validity of cognitive ability tests: A qualitative and quantitative review. Journal of Applied Psychology, 96, 881–906.
Berry, C. M., Ones, D. S., & Sackett, P. R. (2007). Interpersonal deviance, organizational deviance, and their common correlates: A review and meta-analysis. Journal of Applied Psychology, 92, 410–424.
Berry, C. M., & Zhao, P. (2015). Addressing criticisms of existing predictive bias research: Cognitive ability test scores still overpredict African Americans’ job performance. Journal of Applied Psychology, 100, 162–179.
Bliese, P. D., & Hanges, P. J. (2004). Being both too liberal and too conservative: The perils of treating grouped data as though they were independent. Organizational Research Methods, 7, 400–417.
Bobko, P. (1983). An analysis of correlations corrected for attenuation and range restriction. Journal of Applied Psychology, 68, 584–589.
Bobko, P., Roth, P. L., & Buster, M. A. (2007). The usefulness of unit weights in creating composite scores: A literature review, application to content validity, and meta-analysis. Organizational Research Methods, 10, 689–709.
Bobko, P., & Stone-Romero, E. (1998). Meta-analysis is another useful research tool but it is not a panacea. In Ferris, G. (Ed.), Research in personnel and human resources management (Vol. 16, pp. 359–397). Greenwich, CT: JAI Press.
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-analysis. West Sussex, UK: Wiley.
Brennan, R. L. (1998). Raw-score conditional standard errors of measurement in generalizability theory. Applied Psychological Measurement, 22, 307–331.
Brogden, H. E. (1949). When testing pays off. Personnel Psychology, 2, 171–183.
Buster, M. A., Roth, P. L., & Bobko, P. (2005). A process for content validation of education and experience-based minimum qualifications: An approach resulting in federal court approval. Personnel Psychology, 58, 771–799.
Callender, J. C., & Osburn, H. G. (1980). Development and testing of a new model of validity generalization. Journal of Applied Psychology, 65, 543–558.
Callender, J. C., & Osburn, H. G. (1981). Testing the constancy of validity with computer-generated sampling distributions of the multiplicative model variance method estimate: Results for petroleum industry validation research. Journal of Applied Psychology, 66, 274–281.
Campion, M. A., Fink, A. A., Ruggeberg, B. J., Carr, L., Phillips, G. M., & Odman, R. B. (2011). Doing competencies well: Best practices in competency modeling. Personnel Psychology, 64, 225–262.
Campion, M. A., Outtz, J. L., Zedeck, S., Schmidt, F. L., Kehoe, J. F., Murphy, K. R., & Guion, R. M. (2001). The controversy over score banding in personnel selection: Answers to 10 key questions. Personnel Psychology, 54, 149–185.
Carter, N. T., Dalal, D. K., Boyce, A. S., O'Connell, M. S., Kung, M.-C., & Delgado, K. (2014). Uncovering curvilinear relationships between conscientiousness and job performance: How theoretically appropriate measurement makes an empirical difference. Journal of Applied Psychology, 99, 564–586.
Cascio, W. F. (2000). Costing human resources: The financial impact of behavior in organizations (4th ed.). Cincinnati, OH: Southwestern.
Cascio, W. F., Outtz, J., Zedeck, S., & Goldstein, I. L. (1991). Statistical implications of six methods of test score use in personnel selection. Human Performance, 4, 233–264.
Cheung, M. W. L. (2015). Meta-analysis: A structural equation modeling approach. Chichester, UK: Wiley.
Chiaburu, D. S., Oh, I.-S., Berry, C. M., Li, N., & Gardner, R. G. (2011). The five factor model of personality traits and organizational citizenship behaviors: A meta-analysis. Journal of Applied Psychology, 96, 1140–1166.
Christian, M. S., Edwards, B. D., & Bradley, J. C. (2010). Situational judgment tests: Constructs assessed and a meta-analysis of their criterion-related validities. Personnel Psychology, 63, 83–117.
Church, A. H., & Rotolo, C. T. (2013). How are top companies assessing their high-potentials and senior executives? A talent management benchmark study. Consulting Psychology Journal: Practice and Research, 65, 199–223.
Cleary, T. A. (1968). Test bias: Prediction of grades of Negro and white students in integrated colleges. Journal of Educational Measurement, 5, 115–124.
Converse, P. D., & Oswald, F. L. (2014). Thinking ahead: Assuming linear versus nonlinear personality-criterion relationships in personnel selection. Human Performance, 27, 61–79.
Coward, W. M., & Sackett, P. R. (1990). Linearity of ability-performance relationships: A reconfirmation. Journal of Applied Psychology, 75, 297–300.
Cronbach, L. J., & Gleser, G. C. (1965). Psychological tests and personnel decisions (2nd ed.). Urbana, IL: University of Illinois Press.
Dahlke, J. A., & Sackett, P. R. (2017). Refinements to the dMod class of categorical-moderation effect sizes. Organizational Research Methods, 21, 226–234.
DeShon, R. P., & Alexander, R. A. (1996). Alternative procedures for testing regression slope homogeneity when group error variances are unequal. Psychological Methods, 1, 261–277.
Enders, C. K. (2010). Applied missing data analysis. New York, NY: Guilford Press.
Fife, D. A., Mendoza, J. L., & Terry, R. (2013). Revisiting Case IV: A reassessment of bias and standard errors of Case IV under range restriction. British Journal of Mathematical and Statistical Psychology, 66, 521–542.
Finch, D. M., Edwards, B. D., & Wallace, J. C. (2009). Multistage selection strategies: Simulating the effects on adverse impact and expected performance for various predictor combinations. Journal of Applied Psychology, 94, 318–340.
Goldstein, I. L., Zedeck, S., & Schneider, B. (1993). An exploration of the job-analysis-content validity process. In Schmitt, N. & Borman, W. (Eds.), Personnel selection in organizations (pp. 3–34). San Francisco, CA: Jossey-Bass.
Golubovich, J., Grand, J. A., Ryan, A. M., & Schmitt, N. (2014). An examination of common sensitivity review practices in test development. International Journal of Selection and Assessment, 22, 1–11.
Graham, J. W. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549–576.
Guion, R. M. (2011). Assessment, measurement, and prediction for personnel decisions (2nd ed.). Mahwah, NJ: Erlbaum.
Haertel, E. H. (2006). Reliability. In Brennan, R. L. (Ed.), Educational measurement (4th ed., pp. 65–110). Westport, CT: American Council on Education and Praeger Publishers.
Hartigan, J. A., & Wigdor, A. K. (Eds.). (1989). Fairness in employment testing. Washington, DC: National Academy Press.
Hoffman, C. C., & McPhail, S. M. (1998). Exploring options for supporting test use in situations precluding local validation. Personnel Psychology, 51, 987–1003.
Hoffman, C. C., Rashkovsky, B., & D'Egidio, E. (2007). Job component validity: Background, current research, and applications. In McPhail, S. M. (Ed.), Alternative validation strategies: Developing new and leveraging existing validity evidence (pp. 82–121). San Francisco, CA: Wiley.
Hough, L. M., Oswald, F. L., & Ployhart, R. E. (2001). Determinants, detection, and amelioration of adverse impact in personnel selection procedures: Issues, evidence, and lessons learned. International Journal of Selection and Assessment, 9, 1–42.
Huang, J. L., Curran, P. G., Keeney, J., Poposki, E. M., & DeShon, R. P. (2012). Detecting and deterring insufficient effort responding to surveys. Journal of Business and Psychology, 27, 99–114.
Huffcutt, A. I., Conway, J. M., Roth, P. L., & Stone, N. J. (2001). Identification and meta-analytic assessment of psychological constructs measured in employment interviews. Journal of Applied Psychology, 86, 897–913.
Humphreys, L. G. (1952). Individual differences. Annual Review of Psychology, 3, 131–150.
Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72–98.
Hunter, J. E., & Schmidt, F. L. (1996). Measurement error in psychological research: Lessons from 26 research scenarios. Psychological Methods, 1, 199–223.
Hunter, J. E., Schmidt, F. L., & Judiesch, M. K. (1990). Individual differences in output variability as a function of job complexity. Journal of Applied Psychology, 75, 28–42.
Hunter, J. E., Schmidt, F. L., & Rauschenberger, J. (1984). Methodological and statistical issues in the study of bias in mental testing. In Reynolds, C. R. & Brown, R. T. (Eds.), Perspectives on bias in mental testing (pp. 41–99). New York, NY: Plenum.
Hurtz, G. M., & Donovan, J. J. (2000). Personality and job performance: The Big Five revisited. Journal of Applied Psychology, 85, 869–879.
International Taskforce on Assessment Center Guidelines. (2015). Guidelines and ethical considerations for assessment center operations. Journal of Management, 41, 1244–1273.
International Test Commission. (2006). International guidelines on computer-based testing and internet-delivered testing. International Journal of Testing, 6, 143–172.
International Test Commission. (2013). ITC guidelines on test use. Retrieved from https://www.intestcom.org/files/guideline_test_use.pdf
Jawahar, I. M., & Williams, C. R. (1997). Where all the children are above average: The performance appraisal purpose effect. Personnel Psychology, 50, 905–925.
Jeanneret, R., & Silzer, R. (Eds.). (1998). Individual psychological assessment. San Francisco, CA: Jossey-Bass.
Johnson, J. W. (2007). Synthetic validity: A technique of use (finally). In McPhail, S. M. (Ed.), Alternative validation strategies: Developing new and leveraging existing validity evidence (pp. 122–158). San Francisco, CA: Wiley.
Johnson, J. W., & Carter, G. (2010). Validating synthetic validation: Comparing traditional and synthetic validity coefficients. Personnel Psychology, 63, 755–795.
Johnson, J. W., Steel, P., Scherbaum, C. A., Hoffman, C. C., Jeanneret, P. R., & Foster, J. (2010). Validation is like motor oil: Synthetic is better. Industrial and Organizational Psychology, 3, 305–328.
Judge, T. A., Rodell, J. B., Klinger, R. L., Simon, L. S., & Crawford, E. R. (2013). Hierarchical representations of the five-factor model of personality in predicting job performance: Integrating three organizing frameworks with two theoretical perspectives. Journal of Applied Psychology, 98, 875–925.
Keiser, H. N., Sackett, P. R., Kuncel, N. R., & Brothen, T. (2016). Why women perform better in college than admission scores would predict: Exploring the roles of conscientiousness and course-taking patterns. Journal of Applied Psychology, 101, 569–581.
Koch, A. J., D'Mello, S. D., & Sackett, P. R. (2015). A meta-analysis of gender stereotypes and bias in experimental simulations of employment decision making. Journal of Applied Psychology, 100, 128–161.
Kwaske, I. H. (2004). Individual assessments for personnel selection: An update on a rarely researched but avidly practiced practice. Consulting Psychology Journal: Practice and Research, 56, 186–195.
LaHuis, D. M., & Avis, J. M. (2007). Using multilevel random coefficient modeling to investigate rater effects in performance ratings. Organizational Research Methods, 10, 97–107. doi:10.1177/1094428106289394
Levine, J. D., & Oswald, F. L. (2012). O*NET: The Occupational Information Network. In Wilson, M. A., Bennett, W. Jr., Gibson, S. G., & Alliger, G. M. (Eds.), The handbook of work analysis in organizations: Methods, systems, applications, and science of work measurement in organizations (pp. 281–301). New York, NY: Routledge/Psychology Press.
Li, J. C.-H., Chan, W., & Cui, Y. (2011). Bootstrap standard error and confidence intervals for the correlations corrected for indirect range restriction. British Journal of Mathematical and Statistical Psychology, 64, 367–387.
Little, R. J. A., & Rubin, D. B. (2002). Statistical analysis with missing data (2nd ed.). New York, NY: Wiley.
Mattern, K. D., & Patterson, B. F. (2013). Test of slope and intercept bias in college admissions: A response to Aguinis, Culpepper, and Pierce (2010). Journal of Applied Psychology, 98, 134–147. doi:10.1037/a0030610
McDaniel, M. A. (2007). Validity generalization as a test validation process. In McPhail, S. M. (Ed.), Alternative validation strategies: Developing new and leveraging existing validity evidence (pp. 159–180). San Francisco, CA: Wiley.
Mead, A. D., & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive ability tests: A meta-analysis. Psychological Bulletin, 114, 449–458.
Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17, 437–455.
Morris, S. B., Daisley, R. L., Wheeler, M., & Boyer, P. (2015). A meta-analysis of the relationship between individual assessments and job performance. Journal of Applied Psychology, 100, 5–20.
Mueller, L., Norris, D., & Oppler, S. (2007). Implementation based on alternate validation procedures: Ranking, cut scores, banding, and compensatory models. In McPhail, S. M. (Ed.), Alternative validation strategies: Developing new and leveraging existing validity evidence (pp. 349–405). San Francisco, CA: Jossey-Bass.
Murphy, K. R. (2009). Content validation is useful for many things, but validity isn't one of them. Industrial and Organizational Psychology, 2(4), 453–464.
Murphy, K. R., & DeShon, R. (2000). Interrater correlations do not estimate the reliability of job performance ratings. Personnel Psychology, 53, 873–900.
Myors, B., Lievens, F., Schollaert, E., Van Hoye, G., Cronshaw, S. F., Mladinic, A., & Sackett, P. R. (2008). International perspectives on the legal environment for selection. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 206–246.
Naylor, J. C., & Shine, L. C. (1965). A table for determining the increase in mean criterion score obtained by using a selection device. Journal of Industrial Psychology, 3, 33–42.
Newman, D. A. (2014). Missing data: Five practical guidelines. Organizational Research Methods, 17, 372–411.
Newman, D. A., Jacobs, R. R., & Bartram, D. (2007). Choosing the best method for local validity estimation: Relative accuracy of meta-analysis versus a local study versus Bayes-analysis. Journal of Applied Psychology, 92, 1394–1413.
Nieminen, L. R. G., Nicklin, J. M., McClure, T. K., & Chakrabarti, M. (2011). Meta-analytic decisions and reliability: A serendipitous case of three independent telecommuting meta-analyses. Journal of Business and Psychology, 26, 105–121.
Nye, C. D., & Sackett, P. R. (2017). New effect sizes for tests of categorical moderation and differential prediction. Organizational Research Methods, 20, 639–664.
Orr, J. M., Sackett, P. R., & DuBois, C. L. Z. (1991). Outlier detection and treatment in I/O psychology: A survey of researcher beliefs and an empirical illustration. Personnel Psychology, 44, 473–486.
Oswald, F. L., & Johnson, J. W. (1998). On the robustness, bias, and stability of statistics from meta-analysis of correlation coefficients: Some initial Monte Carlo findings. Journal of Applied Psychology, 83, 164–178.
Oswald, F. L., Putka, D. J., & Ock, J. (2015). Weight a minute, what you see in a weighted composite is probably not what you get. In Lance, C. E. & Vandenberg, R. J. (Eds.), More statistical and methodological myths and urban legends (pp. 187–205). New York, NY: Taylor & Francis.
Oswald, F., Saad, S., & Sackett, P. R. (2000). The homogeneity assumption in differential prediction analysis: Does it really matter? Journal of Applied Psychology, 85, 536–541.
Pearlman, K., Schmidt, F. L., & Hunter, J. E. (1980). Validity generalization results for tests used to predict job proficiency and training success in clerical occupations. Journal of Applied Psychology, 65, 373–406.
Petersen, N. S., & Novick, M. R. (1976). An evaluation of some models for culture-fair selection. Journal of Educational Measurement, 13, 3–29.
Peterson, N. G., Mumford, M. D., Borman, W. C., Jeanneret, P. R., & Fleishman, E. A. (Eds.). (1999). An occupational information system for the 21st century: The development of O*NET. Washington, DC: American Psychological Association.
Putka, D. J., & Hoffman, B. J. (2014). The reliability of job performance ratings equals 0.52. In Lance, C. E. & Vandenberg, R. J. (Eds.), More statistical and methodological myths and urban legends (pp. 247–275). New York, NY: Taylor & Francis.
Putka, D. J., Hoffman, B. J., & Carter, N. T. (2014). Correcting the correction: When individual raters offer distinct but valid perspectives. Industrial and Organizational Psychology: Perspectives on Science and Practice, 7, 543–548.
Putka, D. J., & Sackett, P. R. (2010). Reliability and validity. In Farr, J. L. & Tippins, N. T. (Eds.), Handbook of employee selection (pp. 9–49). New York, NY: Routledge.
Qualls-Payne, A. L. (1992). A comparison of score level estimates of the standard error of measurement. Journal of Educational Measurement, 29, 213–255.
Quillian, L., Pager, D., Hexel, O., & Midtbøen, A. H. (2017). Meta-analysis of field experiments shows no change in racial discrimination in hiring over time. Proceedings of the National Academy of Sciences, 114, 10870–10875.
Raju, N. S., Anselmi, T. V., Goodman, J. S., & Thomas, A. (1998). The effect of correlated artifacts and true validity on the accuracy of parameter estimation in validity generalization. Personnel Psychology, 51, 453–465.
Raju, N. S., & Brand, P. A. (2003). Determining the significance of correlations corrected for unreliability and range restriction. Applied Psychological Measurement, 27, 52–71.
Raju, N. S., Burke, M. J., & Normand, J. (1990). A new approach for utility analysis. Journal of Applied Psychology, 75, 3–12.
Raju, N. S., Burke, M. J., Normand, J., & Langlois, G. M. (1991). A new meta-analytic approach. Journal of Applied Psychology, 76, 432–446.
Raju, N. S., Pappas, S., & Williams, C. P. (1989). An empirical Monte Carlo test of the accuracy of the correlation, covariance, and regression slope models for assessing validity generalization. Journal of Applied Psychology, 74, 901–911.
Raju, N. S., Price, L. R., Oshima, T. C., & Nering, M. L. (2007). Standardized conditional SEM: A case for conditional reliability. Applied Psychological Measurement, 31, 169–180.
Roth, P. L., Purvis, K. L., & Bobko, P. (2012). A meta-analysis of gender group differences for measures of job performance in field studies. Journal of Management, 38, 719–739.
Ryan, A. M., & Sackett, P. R. (1998). Individual assessment: The research base. In Jeanneret, R. & Silzer, R. (Eds.), Individual psychological assessment (pp. 54–87). San Francisco, CA: Jossey-Bass.
Saad, S., & Sackett, P. R. (2002). Examining differential prediction by gender in employment-oriented personality measures. Journal of Applied Psychology, 87, 667–674.
Sackett, P. R. (Ed.). (2009a). Industrial and Organizational Psychology, 2(1).
Sackett, P. R. (Ed.). (2009b). Industrial and Organizational Psychology, 2(4).
Sackett, P. R., Laczo, R. M., & Lippe, Z. P. (2003). Differential prediction and the use of multiple predictors: The omitted variables problem. Journal of Applied Psychology, 88, 1046–1056.
Sackett, P. R., Lievens, F., Berry, C. M., & Landers, R. N. (2007). A cautionary note on the effects of range restriction on predictor intercorrelations. Journal of Applied Psychology, 92, 538–544.
Sackett, P. R., Putka, D. J., & McCloy, R. A. (2012). The concept of validity and the process of validation. In Schmitt, N. (Ed.), The Oxford handbook of personnel assessment and selection (pp. 91–118). New York, NY: Oxford University Press.
Sackett, P. R., & Roth, L. (1996). Multi-stage selection strategies: A Monte Carlo investigation of effects on performance and minority hiring. Personnel Psychology, 49, 549–572.
Sackett, P. R., Schmitt, N., Ellingson, J. E., & Kabin, M. B. (2001). High stakes testing in employment, credentialing, and higher education: Prospects in a post-affirmative action world. American Psychologist, 56, 302–318.
Sackett, P. R., & Walmsley, P. T. (2014). Which personality attributes are most important in the workplace? Perspectives on Psychological Science, 9, 538–551.
Sackett, P. R., & Yang, H. (2000). Correction for range restriction: An expanded typology. Journal of Applied Psychology, 85, 112–118.
Schmidt, F. L. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for training of researchers. Psychological Methods, 1, 115–129.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274.
Schmidt, F. L., & Hunter, J. E. (2015). Methods of meta-analysis: Correcting error and bias in research findings. Thousand Oaks, CA: Sage.
Schmidt, F. L., Hunter, J. E., McKenzie, R. C., & Muldrow, T. W. (1979). Impact of valid selection procedures on work-force productivity. Journal of Applied Psychology, 64, 609–626.
Schmidt, F. L., Hunter, J. E., & Pearlman, K. (1981). Task differences as moderators of aptitude test validity in selection: A red herring. Journal of Applied Psychology, 66, 166–185.
Schmidt, F. L., Oh, I.-S., & Le, H. (2006). Increasing the accuracy of corrections for range restriction: Implications for selection procedure validities and other research results. Personnel Psychology, 59, 281–305.
Schmidt, F. L., Pearlman, K., & Hunter, J. E. (1980). The validity and fairness of employment and educational tests for Hispanic Americans: A review and analysis. Personnel Psychology, 33, 705–724.
Schmidt, F. L., Viswesvaran, C., & Ones, D. S. (2000). Reliability is not validity and validity is not reliability. Personnel Psychology, 53, 901–912.
Schmitt, N., & Ployhart, R. E. (1999). Estimates of cross-validity for stepwise regression and with predictor selection. Journal of Applied Psychology, 84, 50–57.
Schneider, B., & Konz, A. (1989). Strategic job analysis. Human Resource Management, 28, 51–63.
Silzer, R., & Jeanneret, R. (2011). Individual psychological assessment: A practice and science in search of common ground. Industrial and Organizational Psychology: Perspectives on Science and Practice, 4, 270–296.
Society for Industrial and Organizational Psychology. (1987). Principles for the validation and use of personnel selection procedures (3rd ed.). College Park, MD: Author.
Society for Industrial and Organizational Psychology. (2003). Principles for the validation and use of personnel selection procedures (4th ed.). Bowling Green, OH: Author.
Steel, P. D. G., Huffcutt, A. I., & Kammeyer-Mueller, J. (2006). From the work, one knows the worker: A systematic review of the challenges, solutions, and steps to creating synthetic validity. International Journal of Selection and Assessment, 14, 16–36.
Steel, P. D., & Kammeyer-Mueller, J. D. (2002). Comparing meta-analytic moderator estimation techniques under realistic conditions. Journal of Applied Psychology, 87, 96–111.
Taylor, H. C., & Russell, J. T. (1939). The relationship of validity coefficients to the practical effectiveness of tests in selection: Discussion and tables. Journal of Applied Psychology, 23, 565–578.
Tipton, E., & Pustejovsky, J. E. (2015). Small-sample adjustment for tests of moderators and model fit using robust variance estimation in meta-regression. Journal of Educational and Behavioral Statistics, 40, 604–634.
Van Iddekinge, C. H., & Ployhart, R. E. (2008). Developments in the criterion-related validation of selection procedures: A critical review and recommendations for practice. Personnel Psychology, 61, 871–925.
Van Iddekinge, C. H., Roth, P. L., Raymark, P. H., & Odle-Dusseau, H. N. (2012). The critical role of the research question, inclusion criteria, and transparency in meta-analyses of integrity test research: A reply to Harris et al. (2012) and Ones, Viswesvaran, and Schmidt (2012). Journal of Applied Psychology, 97, 543–549.
Weiner, J., & Hurtz, G. (2017). A comparative study of online remote proctored versus onsite proctored high-stakes exams. Journal of Applied Testing Technology, 18, 13–20.
Wilkinson, L., & Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594–604.
Ziegler, M., MacCann, C., & Roberts, R. D. (2012). New perspectives on faking in personality assessment. New York, NY: Oxford University Press.