In their focal article, Chamorro-Premuzic, Winsborough, Sherman, and Hogan (2016) provide an overview of a number of new technologies with potentially significant implications for the talent management–related practices of industrial–organizational (I-O) psychology (both challenges and opportunities), which they label "new talent signals." These signals, they argue, are part of a revolution, driven by increasing levels of social activity online as well as data-collection and mining techniques, that has overtaken the conventional practice of talent identification in organizations. Their position is that these trends are leaving I-O psychologists in the dust in terms of our existing traditional theory, research, and methodologies. Their optimistic tone, however, also seems to suggest that although these approaches "have not yet demonstrated validity comparable with old school methods, they tend to disregard theory, and they pay little attention to the constructs being assessed" (Chamorro-Premuzic et al., p. 634), they may, in fact, have some legitimate basis in the identification of talent. Their point, of course, is that momentum in this area has surpassed human resources (HR), let alone I-O psychology, so that any concerns one might have about these trends are essentially "irrelevant." This reminds us of lemmings going off a cliff together.
While we agree with the authors' observations regarding the emergence of the four basic trends they cite (i.e., digital profiling, social media analytics, big data, and gamification), we are definitely not on the same wavelength when it comes to embracing these signals as new leading indicators of potential simply because they are the latest bright, shiny objects. In fact, we were a little surprised at the apparent level of acceptance and credence the authors give to these methods in their article. Although they close the discussion with some general warnings about the full-scale adoption of these methods in practice (citing anonymity concerns, cost issues, and possible legal issues), by arguing that the four methods identified represent new forms of traditional validated methods (such as interviewing, resumés, behavioral ratings, and assessment centers), we think they are effectively endorsing the use of these new tools rather than thinking critically about them. We would like to have seen a more disciplined and objective review of the issues at hand and of how I-O researchers and practitioners need to study these trends in the short and long term.
The world of consulting is full of fads, trends, and even charlatans promoting their own advice, frameworks, products, and technology, much of which has no basis in theory or research (Dunnette, 1990). Unfortunately, many HR professionals (particularly those without I-O psychologists influencing their thinking), as well as their clients, are wooed by the extreme claims made and jump on the wrong bandwagon (a trend labeled as anti–talent management in a panel discussion led by the lead author of this commentary at the 2015 Society for Industrial and Organizational Psychology [SIOP] annual conference; Church, 2015). Recent consulting trends regarding the elimination of performance ratings are among these trendy but dangerous fashions that are concerning to many in the field (Pulakos, Mueller Hanson, Arad, & Moye, 2015). Even the rigorous analytical trends that are emerging in the field, such as big data, have sparked recent debates regarding the capability and values-free nature of such approaches (Church & Dutta, 2013; Guzzo, Fink, King, Tonidandel, & Landis, 2015; Rotolo & Church, 2015).
Fundamentally, we believe that I-O practitioners should take a clearer stance on how we approach these new tools, trends, and technology enhancements. Considering SIOP's tagline of "science for a smarter workplace," shouldn't we be applying the standards of validity, reliability, and utility as we evaluate new theories, practices, and tools for the profession? Based on the review provided by Chamorro-Premuzic et al., as well as our own understanding of the literature in these areas (such as it is), we believe we are far from ready to entertain the use of, let alone recommend the use of, these tools for talent management–related applications. In fact, we would argue that I-O academics and researchers should carefully study them, while I-O practitioners thoughtfully defend against their potential misuse—until we are ready to make an informed decision, that is. How do we do that? Listed below are four steps or recommendations that can help align our thinking and develop a common understanding of the value of these approaches for the field in general and talent management in particular.
1. Focus on Real Talent Management Issues
We suggest that the appropriate place to start the discussion is to identify the talent issues or problems that need to be solved or addressed. These should be directly linked to the business strategy (Silzer & Dowell, 2010) and reflect the future talent requirements and capabilities needed to drive the organization forward (Church, 2014). Once these challenges have been identified and defined, constructs can be developed and studied.
Chamorro-Premuzic et al. seem to approach their analysis from the technology side first, as if the technology should be driving theory and practice. They suggest as much in their statement, "most innovations in talent identification are the product of the digital revolution" (p. 626). Developing measurement techniques, however, should come several steps later in the process and take a back seat to sound theory development. Both the scientific method and sound I-O practice follow this well-supported and rational path to discovering new knowledge, constructs, tools, and techniques. Although the authors do discuss the basic heuristics for differentiating talent (much of which is consistent with the Leadership Potential BluePrint, our approach to high-potential identification as outlined in Church & Silzer, 2014; Silzer & Church, 2009), they do not directly link their discussion to the application of new technologies. In the absence of such conceptual and empirically based linkages, emerging technologies will eventually be sorted out in the public domain based on their usefulness. Not every new concept heralded as the next big thing has become embedded in the culture (does anyone remember the Newton PDA from Apple or the DIVX video system from Circuit City? See Haskin, 2007), and we should reflect on that as some of these tools achieve the equivalent of "Millennials are the latest thing" level of hype in industry settings.
In general, I-O psychologists prefer to take a more professional and rational approach, first focusing on defining the need or issue at hand and then working systematically to get to results and solutions. There is a big difference between designing a program and then figuring out the problem to which it applies, and identifying the problem (or issue or need) first and then developing constructs, tools, or programs to solve it. We are not (or should not be, if we embrace our calling as scientist–practitioners) a fad-driven profession, and the Internet is full of junk tools that have no validity or usefulness. Moreover, a host of approaches and means to test for this exist (Scott & Reynolds, 2010), so why not begin to apply them?
2. Use a Relevant Talent Management Model
Once the problem is identified, the next step is to determine the theories, models, and constructs that are relevant to the need or issue. Simply suggesting that you can "classify individuals as more or less talented" is a limited approach to talent management and reflects the state of the field perhaps 20–30 years ago. Although the layperson may still think this way, most business organizations are far beyond that now. The most sophisticated organizations have systematic and integrated approaches and use an overall model or framework that provides a shared approach to, and understanding of, talent management practices (Silzer & Dowell, 2010). Recent benchmarking research has shown that top companies are implementing integrated talent management systems and moving toward more multitrait, multimethod measurement applications in their assessment programs (Church & Rotolo, 2013; Church, Rotolo, Ginther, & Levine, 2015), whether these are designed to focus on areas to target their "buy versus build" strategies (Cappelli, 2008), the segmentation of roles for critical strategic investments (Boudreau & Ramstad, 2007), or the assessment and development of high potentials (Silzer & Church, 2010).
Randomly pursuing new fads or ideas, on the other hand, is usually a significant waste of organizational resources and may even put the organization at risk. There is a good reason why these shiny objects are often questioned, particularly in successful organizations that are on the leading edge of talent management practice. Budget and people resources need to be reserved for useful and valid initiatives.
Let's take a few of the examples provided by the focal authors. Although at first blush it sounds innovative that Uber is using digital interviewing technology to test "potential drivers exclusively via their smartphones," recent news events regarding various harassment claims, and worse, involving some of its drivers (e.g., Draper, 2014; Edwards, 2014) raise the question of whether the company is misusing the tool, measuring the wrong attributes, or both. Similarly, although using professional networking sites such as LinkedIn to find relevant resumés may make sense for some organizations, particularly if they do not have strong talent acquisition and sourcing programs of their own, exploring entirely new application fit-hiring concepts based on mobile dating apps such as Tinder or eHarmony is highly questionable at the present time. We would make the same argument about using e-mails to make judgments regarding intelligence levels and personality dispositions. Similarly, whereas advanced and highly engaging online simulations can be designed in such a way as to provide meaningful assessment information (e.g., PepsiCo employs such an application, a day in the life of a new CEO, as part of its Senior Leadership Development Center), fads such as gamification are unlikely to be accepted as standard talent management tools until their usefulness and validity have been firmly established. There is still an argument to be made for having job relevance in a simulation (or game). Although all of these new technologies can sound really "cool" and cutting edge to someone in HR, they can also create a host of potential issues and risks for the organization, given our lack of knowledge of, and visibility into, their potential downsides.
In short, leading companies are using models that provide useful and valid frameworks for integrating their talent management efforts. For example, the Leadership Potential BluePrint (Church & Silzer, 2014) is being used in major corporations such as PepsiCo, Eli Lilly, and Citibank to provide such a holistic framework. It has served as the basis for various consulting approaches as well as scholar–practitioner models and reviews of potential in various publications (e.g., Aon-Hewitt, 2013; MacRae & Furnham, 2014; Piip & Harris, 2014). In addition, it was recently featured in a white paper on leadership development (Dugan & O'Shea, 2014) published jointly by the Society for Human Resource Management (SHRM) and SIOP. Although there are other options as well, such as Boudreau and Ramstad's (2007) HC Bridge Model, the point is that a talent management framework should be used to integrate the various components, not the components used to find a framework to latch onto.
3. Apply a Social Systems Perspective to Talent Management
Related to the need for a relevant and integrated talent management model is the importance of taking a broader social systems perspective on talent management. The marketplace today is littered with individual tools and methodologies targeted at different constructs and ideas, some of which have merit and many of which are totally ungrounded. The broader challenge, however, is understanding how these concepts connect in the context of the total organization. Just because senior leadership has a desire to implement the latest fad in performance management or talent identification using big data applications doesn't mean that approach will be effective or useful in their organization. Beyond the unknown measurement attributes of the talent signals Chamorro-Premuzic et al. note, we have questions regarding the degree to which the technology will have an impact on the broader social system. In many ways this reflects a key difference in mindset (Church, 2013, 2014) between thinking about broad-based organization development (OD) implications and more individualistic talent management, differentiation, segmentation, and succession-planning processes. It's the "I" versus the "O" in our field. So far we've been talking about the "I," but the "O" can't be ignored either. Former SIOP President Jim Farr pointed this out in his presidential address and The Industrial–Organizational Psychologist column back in 1997, and it seems things haven't changed all that much in 19-odd years.
For example, what are the implications of using meeting room scheduling, phone records, e-mail length, and wordiness to make "talent"-based decisions on other organizational factors such as work group climate, leadership and managerial behaviors, employee motivation, organizational values, and even productivity? Further, how might the adoption of new technologies (particularly when unproven and with little to no research to support their real-world application) align or conflict with formal reward and recognition systems, communication processes, organizational structure, existing processes for talent reviews, or learning? Although social systems thinking is almost second nature to I-O psychologists (with backgrounds in social psychology and familiarity with Katz & Kahn, 1978, or OD and classic models such as that of Burke & Litwin, 1992), many academics and practitioners today are focused on a very micro talent perspective. This tendency toward myopic and inward thinking can be even more pronounced in the broader HR business partner community (Boudreau & Rice, 2015; Ulrich, 1997).
Unfortunately, we see some of that same narrow mindset applied to the discussion of talent signals in the focal article. It's ironic to us that the discussion of seemingly expansive data technologies in talent identification is presented in ways that are in fact very singular and individual-focused in orientation (i.e., segmenting talent and fit based on combinations of individual data points). It's as if the organizational implications of implementing some of these technologies (again, beyond the absence of validity itself at the level of measurement) are not much of a concern. Although this may not have been what the focal authors intended, the focus is entirely at the individual descriptive level. As scientist–practitioners, we believe firmly that our role is to help organizations design and evaluate processes that span all levels in the organization, from the individual to the organization as an entity. Thus, we would like to see more discussion and debate (as well as theory and research) at the systems level before we endorse the adoption of such technologies as potential "replacements" for existing siloed applications.
4. Take a Normative Perspective to Talent Management
The final step in helping the field make the transition from signals to valid talent management systems is to reflect on the degree to which there should be a normative component to the use of new technologies. I-O psychology as a field is founded on some guiding principles about enhancing the effectiveness of organizations while also making them better for the people who work there (www.siop.org). This is one of the reasons the individual/leadership development components of our work are so important to the field (e.g., McCauley & McCall, 2014), beyond just talent identification and selection. We know that Chamorro-Premuzic et al. agree with us in spirit; however, the discussion of talent signals only vaguely calls into question the values aspect of promoting technology (and big data) applications. Although privacy and anonymity concerns are indeed real and need to be addressed, as others have discussed (e.g., Guzzo et al., 2015), the next layer in the equation is one of relevance and even appropriateness of the focus on data-based segmentation in its purest form. Rotolo and Church (2015) framed this issue as the need for organizations to ensure that validity, valence, and values are applied to this type of technology-driven analytics work going forward.
Let's take the example offered by the focal authors. Predictive models of talent based on the self-reported consumption of curly fries might be interesting to big data people, but their utility to I-O practitioners, HR professionals, and line leaders is almost nil if there is no normative or relevant content involved (one exception might be if the company in question actually made the curly fries). The curly fries example is the epitome of what we might call "values-free analytics": an intriguing insight intellectually but entirely meaningless for practice. HR professionals and their business clients are not about to use random bits of predictive insight to make bench slating or promotion decisions, and if they did, given the example cited by the focal authors, they would be wrong as soon as the algorithm was revealed to the employee base. The increasing pressure on organizations to be transparent regarding how they measure leadership potential and where people stand on the list (Church et al., 2015) will only make matters worse. Underlying attributes such as those measured by the BluePrint (e.g., cognitive skills, personality, motivation, learning) are much harder to fake than a self-reported "like" of a particular item on Facebook or any other social media outlet.
As we noted earlier, organizations need to focus their energy on real issues with relevant and valid models based on underlying constructs. Effective high-potential identification, succession planning, and talent management in general are not about using random predictive variables in review meetings but about making decisions based on knowledge, skills, and capabilities aligned to current and future leadership requirements and matched to an organization's broader business strategy (Church, 2014; Silzer, 2002). We would like to have seen more of this higher level of discussion in the article, since it raises so many possibilities for further debate.
Related to the point of values-free insights is the question of the impact of the method of measurement on the subject (and what types of demand characteristics are created). We think this may be worth studying and conceptualizing further as well. How do we know what the short- and long-term effects of the technology being used are, or will be, on the attributes of the talent using it? If people learn that there is an association between potential and curly fries, then more people will want curly fries. What are the long-term implications for individuals, organizations, and even society of a change in behavior that someone in a big data role has driven through a statistical model? Have we even thought about that?
In other words, although the assumption is that social media predicts existing models and capabilities, if we buy various theories of evolution and sociology, as well as generational differences, it is possible that the use of these tools may ultimately change the way people behave, learn, adapt, engage, and so forth. So should we actually be using new methods to predict old models and capabilities anyway? Or should we be looking to the future somehow? For example, the authors noted a link between higher word counts and intelligence. Yet we know that communication technology has dramatically changed the way we communicate in general; in fact, language use itself is changing. So, will these magical predictive talent models adapt to the use of shorthand (LOL, IMHO, WTF) or even emojis? Have they already? For some people, learning to read emojis is as complex as learning a foreign language. The bottom line is that we don't know the answers to any of these questions. In our opinion there is a real need for research on the impact of new technologies on the talent itself, not just on what the data say about the talent.
Summary
In summary, although we enjoyed the article by Chamorro-Premuzic et al., we are clearly not yet on the same wavelength as these authors with respect to what these new talent signals represent for the field of I-O psychology. Although many of the potential and emerging applications may be intriguing (and even "cool") to us personally, as professionals in a science-based discipline, we believe the field needs to devote more time and attention to them before we recommend, or even allow ourselves to acquiesce to, the adoption of these new technologies in potentially invalid, harmful, and costly ways.
Organizations need guidance from I-O psychologists to enable them to make informed choices about which practices to adopt and why. Although some of these technologies producing "talent signals" may ultimately contribute significant new information and, as a result, change the way in which future talent is assessed and decisions are made, we believe the field should take a more thoughtful and planned approach to evaluating their merits before heralding their arrival as a fait accompli. Until there is sound psychological theory, empirical research, clear guidance for application, and an understanding of the broader individual and organizational outcomes, we recommend steering clear of many of these shiny new objects in practice. As for our academic and research colleagues, however, we would welcome your energy directed to these areas.
In their focal article, Chamorro-Premuzic, Winsborough, Sherman, and Hogan (Reference Chamorro-Premuzic, Winsborough, Sherman and Hogan2016) provide an overview of a number of new technologies with potentially significant implications for talent management related practices of industrial–organizational (I-O) psychology (both challenges and opportunities) that they label as “new talent signals.” These signals, they argue, are part of a revolution, due to increasing levels of social activity online as well as data-collection and mining techniques that have overtaken the conventional practice of talent identification in organizations. Their position is that these trends are leaving I-O psychologists in the dust in terms of our existing traditional theory, research, and methodologies. Their optimistic tone, however, also seems to suggest that while these approaches “have not yet demonstrated validity comparable with old school methods, they tend to disregard theory, and they pay little attention to the constructs being assessed (Chamorro-Premuzic et al., p. 634)” they may, in fact, have some legitimate basis in the identification of talent. Their point, of course, is that momentum in this area has surpassed human resources (HR), let alone I-O psychology, so that any concerns one might have about these trends are essentially “irrelevant.” This reminds us of a reference to lemmings going off a cliff together.
While we agree with the authors’ observations regarding the emergence of the four basic trends they cite (i.e., digital profiling, social media analytics, big data, and gamification) we're definitely not on the same wavelength when it comes to embracing these signals as new leading indicators of potential simply because they are the latest bright, shiny objects. In fact, we were a little surprised at the apparent level of acceptance and credence the authors give to these methods in their article. Although they close the discussion with some general warnings about the full-scale adoption of these in practice (citing anonymity concerns, cost issues, and possible legal issues), by making the argument in their article that the four methods identified equate to the new forms of traditional validated methods (such as interviewing, resumés, behavioral ratings, and assessment centers), we think they are effectively endorsing the use of these new tools instead of thinking critically about them. We would like to have seen a more disciplined and objective review taken of the issues at hand and of how I-O researchers and practitioners need to study these trends in the short and long term.
The world of consulting is full of fads, trends, and even charlatans promoting their own advice, frameworks, products, and technology, much of which has no basis in theory or research (Dunnette, Reference Dunnette, Dunnette and Hough1990). Unfortunately many HR professionals (particularly those without I-O psychologists influencing their thinking) as well as their clients are wooed by the extreme claims made and jump on the wrong bandwagon (a trend labeled as anti–talent management in a panel discussion led by the lead author of this commentary at the 2015 Society for Industrial and Organizational Psychology (SIOP) annual conference; Church, Reference Church2015). Recent consulting trends regarding the elimination of performance ratings are among these trendy but dangerous fashions that are concerning to many in the field (Pulakos, Mueller Hanson, Arad, & Moye, Reference Pulakos, Mueller Hanson, Arad and Moye2015). Even the rigorous analytical trends that are emerging in the field, such as big data, have sparked recent debates regarding the capability and values-free nature of such approaches (Church & Dutta, Reference Church and Dutta2013; Guzzo, Fink, King, Tonidandel, & Landis, Reference Guzzo, Fink, King, Tonidandel and Landis2015; Rotolo & Church, Reference Rotolo and Church2015).
Fundamentally, we believe that I-O practitioners should take a clearer stance on how we approach these new tools, trends, and technology enhancements. Considering SIOP's tagline of “science for a smarter workplace,” shouldn't we be applying the standards of validity, reliability, and utility as we evaluate new theories, practices, and tools for the profession? Based on the review provided by Chamorro-Premuzic et al., as well our own understanding of the literature in these areas (such as it is), we believe we are far from ready to entertain the use of, let alone recommend the use of, these tools for talent management related applications. In fact, we would argue that I-O academics and researchers should carefully study them, while I-O practitioners thoughtfully defend against their potential misuse—until we are ready to make an informed decision, that is. How do we do that? Listed below are four steps or recommendations that can help align our thinking and develop a common understanding of the value of these approaches for the field in general and talent management in particular.
1. Focus on Real Talent Management Issues
We suggest that the appropriate place to start the discussion is to identify the talent issues or problems that need to be solved or addressed. These should be directly linked to the business strategy (Silzer & Dowell, Reference Silzer and Dowell2010) and reflect future talent requirements and capabilities needed to drive the organization forward (Church, Reference Church2014). Once these challenges have been identified and defined then constructs can be developed and studied.
Chamorro-Premuzic et al. seem to approach their analysis from the technology side first, as if the technology should be driving theory and practice. They suggest as much in their statement, “most innovations in talent identification are the product of the digital revolution” (p. 626). Developing measurement techniques, however, should be several steps later in the process and take a back seat to sound theory development. Both the scientific method and sound I-O practice follow this well-supported and rational path to discovering new knowledge, constructs, tools, and techniques. Although the authors do discuss the basic heuristics for differentiating talent (much of which is consistent with the Leadership Potential BluePrint, our approach to high-potential identification as outlined in Church & Silzer, Reference Church and Silzer2014; Silzer & Church, Reference Silzer and Church2009), they do not directly link their discussion to the application of new technologies. In the absence of such conceptual and empirically based linkages, emerging technologies will eventually be sorted out in the public domain based on the usefulness of the program. Not every new concept heralded as the next big thing has become embedded in the culture (does anyone remember the Newton PDA from Apple or the DIVX video system from Circuit City? See Haskin, Reference Haskin2007), and we should reflect on that as some of these tools achieve the equivalent of “Millennials are the latest thing” level of hype in industry settings.
In general, I-O psychologists prefer to take a more professional and rational approach, first focusing on defining the need or issue at hand and then working systematically to get to results and solutions. There is a big difference between designing a program and then figuring out the problem to which it applies, and identifying the problem first (or issue or need) and then developing constructs, tools, or programs to solve it. We are not (or should not be if we embraced our calling as scientist–practitioners) a fad-driven profession, and the Internet is full of junk tools that have no validity or usefulness. Moreover, a host of approaches and means to test for this exist (Scott & Reynolds, Reference Scott and Reynolds2010), so why not begin to apply them?
2. Use a Relevant Talent Management Model
Once the problem is identified the next step is to determine the theories, models, and constructs that are relevant to the need or issue. Simply suggesting that you can “classify individuals as more or less talented” is a limited approach to talent management and reflects the state of the field perhaps 20–30 years ago. Although the layperson may still think this way, most business organizations are far beyond that now. The most sophisticated organizations have systematic and integrated approaches and use an overall model or framework that provides a shared approach and understanding to talent management practices (Silzer & Dowell, Reference Silzer and Dowell2010). Recent benchmarking research has shown that top companies are implementing integrated talent management systems and moving toward more multitrait, multimethod measurement applications in their assessment programs (Church & Rotolo, Reference Church and Rotolo2013; Church, Rotolo, Ginther, & Levine, Reference Church, Rotolo, Ginther and Levine2015), whether these are designed to focus on areas to target their “buy versus build” strategies (Cappelli, Reference Cappelli2008), the segmentation of roles for critical strategic investments (Boudreau & Ramstad, Reference Boudreau and Ramstad2007), or the assessment and development high potentials (Silzer & Church, Reference Silzer, Church, Silzer and Dowell2010).
Randomly pursuing new fads or ideas, on the other hand, is usually a significant waste of organizational resources and may even put the organization at risk. There is a good reason why these shiny objects are often questioned, particularly in successful organizations that are on the leading edge of talent management practice. Budget and people resources need to be reserved for useful and valid initiatives.
Let's take a few of the examples provided by the focal authors. Although at first blush it sounds innovative that Uber is using digital interviewing technology to test “potential drivers exclusively via their smartphones,” given recent events in the news regarding various harassment claims involving the company and, even worse, some of its drivers (e.g., Draper, Reference Draper2014; Edwards, Reference Edwards2014), it raises the question of whether the company is misusing the tool, measuring the wrong attributes, or both. Similarly, although using professional networking sites such as LinkedIn to find relevant resumés may make sense for some organizations, particularly those that do not have strong talent acquisition and sourcing programs of their own, exploring entirely new application fit-hiring concepts based on mobile dating apps such as Tinder or eHarmony is highly questionable at the present time. We would make the same argument about using e-mails to make judgments regarding intelligence levels and personality dispositions. Likewise, whereas advanced and highly engaging online simulations can be designed in such a way as to provide meaningful assessment information (e.g., PepsiCo employs such an application, a day in the life of a new CEO, as part of its Senior Leadership Development Center), fads such as gamification are unlikely to be accepted as standard talent management tools until their usefulness and validity have been firmly established. There is still an argument to be made for having job relevance in a simulation (or game). Although all of these new technologies can sound really “cool” and cutting edge to someone in HR, they can also create a host of potential issues and risks for the organization, given our lack of knowledge of, and visibility into, their potential downsides.
In short, leading companies are using models that provide useful and valid frameworks for integrating their talent management efforts. For example, the Leadership Potential BluePrint (Church & Silzer, Reference Church and Silzer2014) is a model that is being used in major corporations such as PepsiCo, Eli Lilly, Citibank, and others to provide such a holistic framework. It has been used as the basis for various consulting approaches as well as scholar–practitioner models and reviews of potential in various publications (e.g., Aon-Hewitt, 2013; MacRae & Furnham, Reference MacRae and Furnham2014; Piip & Harris, Reference Piip, Harris, Harris and Short2014). In addition, it was recently featured in a white paper on leadership development (Dugan & O'Shea, Reference Dugan and O'Shea2014) published jointly by the Society for Human Resource Management (SHRM) and the Society for Industrial and Organizational Psychology (SIOP). Although there are other options as well, such as Boudreau and Ramstad's (Reference Boudreau and Ramstad2007) HC Bridge Model, the point is that a talent management framework should be used to integrate the various components, not the components used to find a framework to latch onto.
3. Apply a Social Systems Perspective to Talent Management
Related to the need to have a relevant and integrated talent management model is the importance of taking a broader social systems perspective on talent management. The marketplace today is littered with individual tools and methodologies targeted at different constructs and ideas, some of which have merit and many of which are totally ungrounded. The broader challenge, however, is understanding how these concepts connect in the context of the total organization. Just because senior leadership has a desire to implement the latest fad in performance management or talent identification using big data applications doesn't mean that approach will be effective or useful in their organization. Beyond just the unknown measurement attributes of the talent signals Chamorro-Premuzic et al. note are questions we have regarding the degree to which the technology will have an impact on the broader social system. In many ways this reflects a key difference in mindset (Church, Reference Church2013, Reference Church2014) between thinking about broad-based organization development (OD) implications and more individualistic talent management, differentiation, segmentation, and succession-planning processes. It's the “I” versus the “O” in our field. So far we've been talking about the “I,” but the “O” can't be ignored either. Former SIOP President Jim Farr pointed this out in his presidential address and The Industrial–Organizational Psychologist column back in Reference Farr1997, and it seems things haven't changed all that much in 19-odd years.
For example, what are the implications of using meeting room scheduling, phone records, e-mail length, and wordiness to make “talent”-based decisions for other organizational factors such as work group climate, leadership and managerial behaviors, employee motivation, organizational values, and even productivity? Further, how might the adoption of new technologies (particularly when unproven and with little to no research to support their real-world application) align or conflict with formal reward and recognition systems, communication processes, organizational structure, existing processes for talent reviews, or learning? Although social systems thinking is almost second nature to I-O psychologists (with backgrounds in social psychology and familiarity with Katz & Kahn, Reference Katz and Kahn1978, or OD and classic models such as that of Burke & Litwin, Reference Burke and Litwin1992), many academics and practitioners today are focused on a very micro talent perspective. This tendency toward myopic and inward thinking can be even more pronounced in the broader HR business partner community (Boudreau & Rice, Reference Boudreau and Rice2015; Ulrich, Reference Ulrich1997).
Unfortunately, we see some of that same narrow mindset applied to the discussion of talent signals in the focal article. It's ironic to us that the discussion of seemingly expansive data technologies in talent identification is presented in ways that are in fact very singular and individually focused in orientation (i.e., segmenting talent and fit based on combinations of individual data points). It's as if the organizational implications of implementing some of these technologies (again, beyond the absence of validity itself at the level of measurement) are not much of a concern. Although this may not have been what the focal authors intended, the focus is entirely at the individual descriptive level. As scientist–practitioners, we believe firmly that our role is to help organizations design and evaluate processes that span all levels in the organization, from the individual to the organization as an entity. Thus, we would like to see more discussion and debate (as well as theory and research) at the systems level before we endorse the adoption of such technologies as new potential “replacements” for existing siloed applications.
4. Take a Normative Perspective to Talent Management
The final step to helping the field make the transition from signals to valid talent management systems is to reflect on the degree to which there should be a normative component to the use of new technologies. I-O psychology as a field is founded on some guiding principles about enhancing the effectiveness of organizations while also making them better for the people who work there (www.siop.org). This is one of the reasons why the individual/leadership development components of our work are so important to the field (e.g., McCauley & McCall, Reference McCauley and McCall2014), beyond just talent identification and selection. We know that Chamorro-Premuzic et al. agree with us in spirit; however, their discussion of talent signals only vaguely calls into question the values aspect of promoting technology (and big data) applications. Although privacy and anonymity concerns are indeed real and need to be addressed, as others have discussed (e.g., Guzzo et al., Reference Guzzo, Fink, King, Tonidandel and Landis2015), the next layer in the equation is one of relevance and even appropriateness of the focus on data-based segmentation in its purest form. Rotolo and Church (Reference Rotolo and Church2015) framed this issue as the need for organizations to ensure that validity, valence, and values are applied to this type of technology-driven analytics work going forward.
Let's take the example offered by the focal authors. The utility of predictive models of talent based on the self-reported consumption of curly fries might be interesting to big data specialists, but the utility to I-O practitioners, HR professionals, and line leaders is almost nil if there is no normative or relevant content involved (one exception might be if the company in question actually made the curly fries). The curly fries example is the epitome of what we might call “values-free analytics”: an intellectually intriguing insight but one that is entirely meaningless for practice. HR professionals and their business clients are not about to use random bits of predictive insight to make bench-slating or promotion decisions, and if they did, given the example cited by the focal authors, those decisions would be wrong as soon as the algorithm was revealed to the employee base. The increasing pressure on organizations to be transparent regarding how they measure leadership potential and where people stand on the list (Church et al., Reference Church, Rotolo, Ginther and Levine2015) will only make matters worse. Underlying attributes such as those measured by the BluePrint (e.g., cognitive skills, personality, motivation, learning) are much harder to fake than a self-reported “like” or personal preference toward a particular item on Facebook or any other social media outlet.
As we noted earlier, organizations need to focus their energy on real issues with relevant and valid models based on underlying constructs. Effective high-potential identification, succession planning, and talent management in general are not about using random predictive variables in review meetings but about making decisions based on knowledge, skills, and capabilities, aligned to current and future-state leadership requirements and matched to an organization's broader business strategy (Church, Reference Church2014; Silzer, Reference Silzer2002). We would like to have seen more of this higher level of discussion in the article, since it raises so many possibilities for further debate.
Related to the point of values-free insights is the question of the impact of the method of measurement on the subject (and what types of demand characteristics are created). We think this may be worthy of further study and conceptualization as well. How do we know what the short- and long-term effects are, or will be, of the technology being used on the attributes of the talent using it? If people learn that there is an association between potential and curly fries, then more people will want curly fries. What are the long-term implications for individuals, organizations, and even society of that change in behavior, driven by someone in a big data role through a statistical model? Have we even thought about that?
In other words, although the assumption is that social media predicts existing models and capabilities, if we buy various theories of evolution and sociology, as well as generational differences, it is possible that the use of these tools may ultimately change the way people behave, learn, adapt, engage, and so forth. So should we actually be using new methods to predict old models/capabilities anyway? Or should we be looking to the future somehow? For example, the authors noted a link between higher word counts and intelligence. Yet we know that communication technology has dramatically changed the way we communicate in general; in fact, language use itself is changing. So, will these magical predictive talent models adapt to the use of shorthand (LOL, IMHO, WTF) or even emojis? Have they already? For some people, learning to read emojis is as complex as learning a foreign language. The bottom line is that we don't know the answers to any of these questions. In our opinion there is a real need for research on the impact of new technologies on the talent itself, not just on what they say about the talent.
Summary
In summary, although we enjoyed the article by Chamorro-Premuzic et al., we are clearly not yet on the same wavelength as these authors with respect to what these new talent signals represent for the field of I-O psychology. Although many of the potential and emerging applications may be intriguing (and even “cool”) to us personally, as professionals in a science-based discipline, we believe the field needs to devote more time and attention to them before we recommend these new technologies, or even acquiesce to their adoption, in potentially invalid, harmful, and costly ways.
Organizations need guidance from I-O psychologists to enable them to make informed choices about which practices to adopt and why. Although some of the technologies producing these “talent signals” may ultimately contribute significant new information and, as a result, change the way in which future talent is assessed and decisions are made, we believe the field should take a more thoughtful and planned approach to evaluating their merits before heralding their arrival as a fait accompli. Until there is sound psychological theory, empirical research, clear guidance for application, and an understanding of the broader individual and organizational outcomes, we recommend steering clear of many of these shiny new objects in practice. As for our academic and research colleagues, however, we would welcome your energy directed to these areas.