Chamorro-Premuzic, Winsborough, Sherman, and Hogan (2016) note that new talent signals recently adopted by organizations are related to older selection and assessment methods. Drawing this connection between old and new technologies is helpful; however, viewing new technology as either shiny new objects or a brave new world creates a false dichotomy. Recent technology-enhanced human resources (HR) processes, such as the widespread use of gamified practices and video-recorded interviewing, are neither mere fads nor the beginning of a wholesale transformation of HR but rather natural evolutions of existing methods that differ along specific dimensions that can be identified and measured. It is important to view these recent advances as extensions of existing methods. That is, we need to focus on how these new methods are different, not merely that they are different.
Understanding how these methods for identifying talent differ is important because they have measurable psychological effects on users. For example, we can compare video-recorded interviews with in-person interviews or synchronous online interviews, but such comparisons need to be articulated in terms of specific differences between the administration media. In this reply, we propose a path forward for research on new selection tools and caution against forgoing good scientific practice when using them.
A Path Forward
To better understand how technological advances can affect organizational processes like personnel selection, we need to develop a better understanding of technology itself. Many of industrial–organizational (I-O) psychology's past and current discussions of technology have focused on assessing how particular hardware or software innovations change current practice. A number of studies have pitted two technologies against each other to draw conclusions about their use. Although these studies have each added value to their respective fields of study, they do not necessarily help us determine how the next major technological innovation will affect organizations. As technology advances at an ever-increasing rate, we need to move beyond examining individual technologies and start building frameworks, grounded in psychological theory, for understanding the ways technologies can vary from one another.
This call for the development of technological taxonomies and frameworks is not new. A number of researchers have developed frameworks to describe ways that technology can affect communication (Daft & Lengel, 1986; Maruping & Agarwal, 2004) and assessment in personnel selection (Leeson, 2006; Potosky, 2008). For example, Potosky's (2008) framework suggests that assessments administered through a medium are affected by the medium's transparency, social bandwidth, interactivity, and surveillance, and it supplies propositions for how each of these characteristics should affect assessment. These frameworks have existed for several years; however, they are seldom empirically tested in I-O psychology research. Without empirical testing, they remain in a state of limbo: they aim to help us understand technology, but their lack of empirical support means we cannot be sure of their propositions. These frameworks could be of immense use, but they first need to be tested. We currently lack the evidence to say whether extant taxonomies and frameworks are sufficient or whether they require expansion or modification. Moreover, the failure to integrate these frameworks into the I-O psychology literature reinforces the common approach of making one-off comparisons of technologies and keeps us further from understanding future innovations. By testing and refining technological frameworks, the field can move closer to making informed predictions about how new forms of technology will affect organizations, without the need to compare the "old" with the "new."
Chamorro-Premuzic et al. use video-recorded responses to structured interviews as an example of a new version of traditional assessment methods. Although they are correct that this is a new technology, it is not wholly unlike extant interview methods. Rather than examining video-recorded interviews and synchronous online interviewing as two separate phenomena, we should examine how they differ. That is, we can apply frameworks like Potosky's to make specific predictions as to how they might differ. This will allow us to understand the next great innovation in interviewing, whatever that may be. Better yet, we will be in a position to offer advice as to what that next step should be.
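To make this concrete, the sketch below encodes two interview media as profiles on Potosky's (2008) four attributes and flags the attributes on which they differ. The numeric ratings are purely illustrative assumptions of ours, not values drawn from Potosky's framework; the point is that attribute-level coding turns "new versus old" comparisons into testable, dimension-specific predictions.

```python
# A minimal sketch: encoding administration media as profiles on
# Potosky's (2008) four attributes. The ratings below are illustrative
# assumptions for demonstration, not values from the framework itself.

from dataclasses import dataclass

@dataclass
class MediumProfile:
    name: str
    transparency: int       # 1 (low) to 5 (high); illustrative scale
    social_bandwidth: int
    interactivity: int
    surveillance: int

    def compare(self, other: "MediumProfile") -> dict:
        """Return attribute-level differences between two media."""
        return {
            attr: getattr(self, attr) - getattr(other, attr)
            for attr in ("transparency", "social_bandwidth",
                         "interactivity", "surveillance")
        }

# Hypothetical profiles for two interview media
recorded = MediumProfile("video-recorded interview", 3, 2, 1, 4)
synchronous = MediumProfile("synchronous video interview", 3, 4, 4, 3)

# Attributes with nonzero differences are where predictions should focus
print(recorded.compare(synchronous))
# {'transparency': 0, 'social_bandwidth': -2, 'interactivity': -3, 'surveillance': 1}
```

Under this hypothetical coding, the two media are expected to differ most in interactivity and social bandwidth, which is precisely where empirical comparisons and candidate-reaction hypotheses should concentrate.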
Chamorro-Premuzic et al. also address gamification, which, like video-recorded interviewing, is best thought of as an extension of older selection methods. Organizations are drawn to gamified selection tools by the belief that these tools will be more engaging for candidates than traditional selection tools. Although much research has been devoted to understanding gamified training and subsequent learning outcomes (Landers & Armstrong, 2014; Landers & Callan, 2011; Orvis, Horn, & Belanich, 2008; Wilson et al., 2008), virtually no research has focused on gamified selection. The dearth of published research in this area is a major hindrance to our ability to use these tools fully and to be confident in decisions made with them. All is not lost, however; I-O psychologists are experts in understanding motivation and how goals are affected by feedback. We can draw from what we know about traditional selection tools and motivation and apply it to these new approaches to understand how gamified selection tools might affect candidates in different ways.
In gamified selection, aspects of the technology affect candidates in unintended but ultimately scientifically predictable ways. Cognitive load theory suggests that tasks performed in unfamiliar domains are more likely to tax working memory (Kalyuga, Chandler, & Sweller, 2001). That is, when operating in unfamiliar domains, candidates are likely to show inhibited performance due to increases in extraneous cognitive load (Chandler & Sweller, 1991). In line with cognitive load theory, Orvis et al. (2008) found that prior videogame experience resulted in higher motivation and better performance in instructional videogames. Candidates with less videogame experience are therefore likely to be at a disadvantage, but research is needed to investigate these issues in this new context. Recent research on video interviews and technology-mediated communication has further shown how new selection methods might differ from old ones as a product of the unique aspects of the technology used. Horn and Behrend (2016) found that the presence of the picture-in-picture window, which allows users to see themselves during a video (Skype) interview, resulted in higher cognitive load than when the window was absent. The picture-in-picture window acts as an attentional distractor, and such distractors increase cognitive load on complex tasks (Baron, Moore, & Sanders, 1978). Further, communication through video has been shown to elicit higher cognitive load than audio-only (Hinds, 1999) or face-to-face communication (Ferran & Watts, 2008). These findings underscore the need to better understand the impact of particular features of selection tools.
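As an illustration of the kind of focused test this line of work calls for, the sketch below compares self-reported cognitive load between a picture-in-picture-present and a picture-in-picture-absent condition. The data are simulated under an assumed effect, so the numbers themselves mean nothing; the design, in which a single technology feature is manipulated while everything else is held constant, is the point.

```python
# A sketch of an attribute-level test: does one feature of the medium
# (the picture-in-picture window) raise cognitive load? Data are
# simulated under an assumed effect; only the design is meaningful.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical cognitive-load ratings (e.g., a 1-9 self-report scale)
pip_present = rng.normal(loc=6.0, scale=1.2, size=40)  # assumed higher load
pip_absent = rng.normal(loc=5.2, scale=1.2, size=40)

t, p = stats.ttest_ind(pip_present, pip_absent)
d = (pip_present.mean() - pip_absent.mean()) / np.sqrt(
    (pip_present.var(ddof=1) + pip_absent.var(ddof=1)) / 2
)  # Cohen's d with pooled SD (equal n)
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```

Accumulating such feature-level effects across studies is what would let a framework like Potosky's graduate from propositions to supported predictions.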
Context and Scenery
As we navigate these new roads to selection decisions, we must not forget that changes in scenery affect applicants. The convenience of technology often leads organizations to become lax about the degree of structure implemented in assessments. As technology grants candidates greater autonomy, there are more opportunities for error to enter assessment scores. Gamified tests and video interviews allow users to choose the environment in which they perform. When choosing where to take an assessment, one candidate might lounge in a cool, quiet room, whereas another candidate without that luxury may opt for a precarious stool in the crowded coffee shop down the street. These choices about context are not without consequences, as we have learned repeatedly and from varying angles over the years, such as in research on situational strength (e.g., Meyer, Dalal, & Bonaccio, 2012) and trait activation (e.g., Tett & Burnett, 2003). Each context can introduce idiosyncratic elements that affect performance in positive or negative ways. Temperature (Halali, Meiran, & Shalev, 2016) and office design (Seddigh et al., 2015) are just two contextual factors shown to affect performance on demanding cognitive tasks. Researchers are trained to take the environment into account when designing experimental studies, but unfortunately the same care is not always taken in practice. If we are to be confident in decisions made using new tools, we must account for the consequences of the autonomy that comes with them.
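One practical, if partial, response is to log simple features of the candidate-chosen context at administration time and check whether they relate to scores before treating those scores as comparable. The sketch below illustrates this idea with simulated data; all variable names, effects, and values are hypothetical.

```python
# A minimal sketch of auditing candidate-chosen context: log a few
# environmental covariates at test time and test their association with
# scores. Everything here (variables, effect sizes) is hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 200

df = pd.DataFrame({
    "noise_level": rng.integers(1, 6, n),          # 1 = quiet, 5 = very noisy
    "device": rng.choice(["desktop", "mobile"], n),
    "interruptions": rng.poisson(1.0, n),
})
# Simulated scores in which context adds construct-irrelevant variance
df["score"] = (50 - 1.5 * df["noise_level"] - 2.0 * df["interruptions"]
               + rng.normal(0, 5, n))

model = smf.ols("score ~ noise_level + interruptions + C(device)", data=df).fit()
# Nonzero context coefficients would flag a score-comparability problem
print(model.summary().tables[1])
```

Detecting such effects does not by itself fix them, but it tells an organization whether its unproctored, candidate-chosen administration conditions are quietly eroding the standardization that selection systems depend on.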
Conclusion
Chamorro-Premuzic et al. have provided a useful brief overview of new selection tools. Our aim here was to offer a path forward and a way of thinking when researching these tools. If we continue to treat each new technology as a wholly new phenomenon, we will repeatedly ask the same question—How does new x compare to old y?—without really learning anything in the process. If we instead understand exactly how technology-enhanced selection methods differ from one another, we can learn how the next innovation will affect organizational processes and, in turn, make more informed decisions. This will yield an understanding of technology that is more stable and more robust to future technological change. Finally, we acknowledge that research cannot keep perfect pace with practice, but we should aim to make that gap as narrow as possible. By improving our understanding of the ways technologies affect users, researchers will be better able to keep up with practitioners in studying and using new innovations. Similarly, practitioners can use this enhanced understanding to take additional steps to control for contextual factors during assessment delivery.