Why do organizations conduct job interviews, despite the enormous costs associated with the interview process? At first blush, this does not seem like an especially challenging question. This is because a natural and seemingly obvious answer immediately comes to mind: interviews are for predicting a candidate’s future performance and fit with respect to the hiring organization’s requirements, values, and culture—that’s why organizations conduct interviews, despite their costs (Cappelli, 2019b; Elfenbein & Sterling, 2018; Muehlemann & Strupler Leiser, 2018; Society for Human Resource Management [SHRM], 2017). This is also the traditional view of interviewing espoused by managers and is how the nature and function of interviews are characterized in human resource management (HRM) textbooks (Dessler, 2020; Mathis, Jackson, Valentine, & Meglich, 2016; Mondy & Martocchio, 2016). Thus, although the costs may be undesirable, they are the price to pay, as it were, to be able to judge whether a candidate will match the needs of the role and the organization.
In this article, we suggest that the question of why to conduct interviews is more difficult than it first seems. The force of this question can be appreciated when it is juxtaposed against a twofold threat that, we argue, the traditional view of interviewing faces. The first, the behavioral threat, holds that a large body of behavioral evidence shows we are poor predictors of future performance and bad judges of fit. This is for multiple reasons: the judgments of interviewers are riddled with biases, interviewers overestimate their assessment capacities, and organizations rarely assess the performance of candidates they might have passed on (in relation to the candidates they ultimately selected). As one HRM textbook notes, “traditionally, interviews have not been valid predictors of success on the job” (Mondy & Martocchio, 2016: 165). In short, those involved in making hiring decisions are demonstrably bad at predicting future performance and assessing fit.
The behavioral threat has brought some management theorists to suggest abandoning interviews as traditionally conceived (i.e., unstructured interviews) and moving toward structured interviews. Yet structured interviews, too, face problems: they can collapse into unstructured interviews, or they can begin unstructured, either before or after the official start of the interview, and in doing so increase exposure to the behavioral threat. More fundamentally, the behavioral threat is simply pushed back one step, to the point at which one decides on the structure of the interview. Thus, although structured interviews may be an improvement on unstructured interviews, they, too, do not fare especially well against the behavioral threat.
A defender of the traditional view might acknowledge the force of the behavioral threat yet still respond, “We have no better alternative!” But this argumentative maneuver is cut off by the second threat the traditional view faces: the algorithmic threat. Algorithms already have a superior track record to humans, even expert humans, of predicting the performance and fit of candidates in a number of domains. Indeed, 67 percent of the eighty-eight hundred recruiters and hiring managers surveyed globally by LinkedIn in 2018 reported using artificial intelligence (AI) tools to save time in sourcing and screening candidates (Ignatova & Reilly, 2018). So, where does this leave the practice of interviewing?
The behavioral and algorithmic threats, taken together, pose what we call the “interview puzzle” for the traditional view of interviewing. If the traditional view is correct about the nature and function of interviews—that interviews are for predicting the future performance and fit of a candidate with respect to the role’s and organization’s needs—then it seems as though the justification for the practice is undermined. Not only is interviewing costly (Cappelli, 2020; Muehlemann & Strupler Leiser, 2018; SHRM, 2017), but we are also bad at it, and we may have better alternatives for predicting performance and fit (i.e., algorithms). Continuing to interview, then, if it is only about predicting performance and fit, seems to be at best an anachronistic human resources (HR) practice or at worst blatant wastefulness sustained by irrational managerial overconfidence. For these reasons, we argue that the traditional view of interviewing must be reexamined.
If interviews were singularly a means of predicting performance and fit, as the traditional view posits, then we maintain that the justification for interviews would be undermined. However, we argue that the antecedent of this conditional is false: interviews are not singularly a means of predicting performance and fit; rather, they are a much richer normative practice. In particular, we argue that interviews offer different kinds of value that have thus far been overlooked, and thus the practice can be worth preserving, despite the behavioral and algorithmic threats. Something of normative significance would be lost were we to abandon the practice of interviewing, and this must be accounted for in our understanding of the nature of interviews.
In other words, we dissolve the interview puzzle by arguing that although the behavioral and algorithmic threats are indeed concerning, they only threaten to undermine our interview practices if the traditional view of interviewing is the whole story. But we argue that the traditional view of interviewing accounts for only part of its function—the parts it overlooks are the other kinds of value that interviews create, and these other kinds of value do not succumb to the behavioral and algorithmic threats. By reframing how we understand the nature of interviews, we advance a broader, normative conception of interviewing that suggests that our ability to choose whom we relate to in the workplace is an important source of value and that our work lives may be worse off without the practice.
We proceed as follows. In section 1, we characterize the traditional view of interviewing and discuss the costs of interviewing that are extensively documented in the HRM literature. In section 2, we discuss the behavioral and algorithmic threats and argue that together they undermine the traditional view of interviewing and thus generate the interview puzzle. In section 3, we introduce our value of choice theory of interviewing, grounded in the work of the philosopher T. M. Scanlon (1988, 1998, 2013, 2019). We show how the interview puzzle can be dissolved once we grasp the inadequacy of the traditional view of interviewing: it fails to account for a broader range of contenders for the kinds of value that can be realized through interviewing. If the view we advance is correct, then the current understanding in HRM and management scholarship of the nature and function of interviews must be significantly expanded. In section 4, we offer several clarifications of our account and discuss some potential objections. In section 5, we discuss new avenues of research that follow from our work. Finally, in section 6, we conclude.
1. THE TRADITIONAL VIEW OF INTERVIEWING
The traditional view of interviewing holds that interviews are one class of selection tools (among others, such as tests and background checks) that are useful for predicting a candidate’s performance and fit. In particular, a selection interview is defined as “a selection procedure designed to predict future job performance based on applicants’ oral responses to oral inquiries” (Dessler, 2020: 207) and is considered a tool for assessing a candidate’s knowledge, skills, abilities, and competencies in relation to what is required for the job (Dessler, 2020; Graves & Karren, 1996; McDaniel, Whetzel, Schmidt, & Maurer, 1994).
Interviews are widespread, in part, because of the belief that they are effective in simultaneously assessing candidates’ ability, motivation, personality, aptitude, person–job fit, and person–organization fit (Highhouse, 2008). Several common assumptions sustain this belief: that making accurate predictions about candidates’ future job performance is possible (Highhouse, 2008); that experience and intuition are necessary for effective hiring (Gigerenzer, 2007); that human beings (i.e., candidates) can be effectively evaluated only by equally sensitive complex beings (e.g., hiring managers), rather than by tests or algorithms (Highhouse, 2008); and that oral discussions with candidates can be revealing, as they allow for “reading between the lines” (Highhouse, 2008: 337).
Despite the widespread use of interviews, they are recognized to be a costly and time-consuming practice. The United States “fills a staggering 66 million jobs a year. Most of the $20 billion that companies spend on human resources vendors goes to hiring” (Cappelli, 2019b: 50). On average, employers in the United States spend approximately $4,000 per hire to fill non-executive-level positions and about $15,000 per hire to fill executive-level positions (SHRM, 2016, 2017), and a substantial portion of these costs is attributed to interviews. Outside the United States, employers report similar experiences. For example, in Switzerland, employers spend, on average, as much as 16 weeks of wage payments to fill a skilled worker vacancy, of which 21 percent involves search costs, and roughly 50 percent of the search costs are direct interview costs (Muehlemann & Strupler Leiser, 2018). In addition, significant opportunity costs are associated with interviews for all parties involved (Muehlemann & Strupler Leiser, 2018).
With respect to time, a recent talent acquisition benchmarking report finds that US employers spend, on average, approximately eight days per job conducting interviews (SHRM, 2017). Employers report similar experiences outside the United States. For example, in Switzerland, employers spend, on average, approximately 8.5 hours on job interviews per candidate (Muehlemann & Strupler Leiser, 2018).
Of course, the costs of hiring and interviewing are not uniform. They vary depending on the skill requirements of the job (Muehlemann & Strupler Leiser, 2018) and the degree of labor market tightness (Davis, Faberman, & Haltiwanger, 2012; Pissarides, 2009; Rogerson & Shimer, 2011), among other factors. That said, these costs remain substantial on average and are increasing—employers today spend twice as much time on interviews as they did in 2009 (Cappelli, 2019b).
As costly and time-consuming as interviews are, there are also difficulties associated with verifying whether they are worth these costs. Indeed, “only about a third of US companies report that they monitor whether their hiring practices lead to good employees; few of them do so carefully, and only a minority even track cost per hire and time to hire” (Cappelli, 2019b: 50). Even if it were not so difficult to assess whether interviews are worth the costs with respect to the end posited by the traditional view (i.e., predicting performance and fit), two additional threats remain.
2. THE INTERVIEW PUZZLE: THE BEHAVIORAL AND ALGORITHMIC THREATS
2.1 The Behavioral Threat
The traditional conception of interviews—as a means to predict a candidate’s performance and fit in relation to a vacancy—hinges on an important assumption, namely, that performance and fit can be effectively predicted through interviewing. However, a considerable body of knowledge from the social sciences challenges this basic assumption and chronicles the poor track record of predicting performance and fit through interviews (Bishop & Trout, 2005; Bohnet, 2016; Chamorro-Premuzic & Akhtar, 2019; McCarthy, Van Iddekinge, & Campion, 2010; Rivera, 2012). Specifically, although empirical evidence highlights the outsized role interviews play in the hiring process (Billsberry, 2007), interview-based hiring decisions have been found to account for only up to 10 percent of the variation in job performance (Conway, Jako, & Goodman, 1995). Additionally, biases pervade the process of predicting performance and fit through interviews, in both their unstructured and structured formats (Huffcutt, Roth, & McDaniel, 1996; McDaniel et al., 1994).
2.1.1 Unstructured Interviews
Unstructured interviews do not have a fixed format or a fixed set of questions, nor do they involve a fixed process for assessing the given responses (Schmidt & Hunter, 1998). During unstructured interviews, both the interviewer and the candidate investigate what seems most relevant at the time (Bohnet, 2016). This process often produces an overall rating for each applicant “based on summary impressions and judgments” (Schmidt & Hunter, 1998: 267). Unstructured interviews are often assumed to be effective in concurrently assessing a range of dimensions associated with predicting performance and person–organization fit (Highhouse, 2008).
However, recent research shows that unstructured interviews may not in fact aid hiring decisions. This research maintains that unstructured interviews are riddled with biases and are often swayed by the whims of interviewers (Chamorro-Premuzic & Akhtar, 2019). Specifically, it suggests that unstructured interviews are ineffective because interviewers tend to overlook the limits of their knowledge (Kausel, Culbertson, & Madrid, 2016), “decide on the fly” what questions to ask of which candidates and how to interpret responses (Cappelli, 2019b: 50), place disproportionate emphasis on a few pieces of information (Dawes, 2001), and confirm their own existing preferences (Chamorro-Premuzic & Akhtar, 2019). Subsequently, they become increasingly confident in the accuracy of their decisions, even when irrelevant information is introduced (Bohnet, 2016; Dawes, 2001). One reason for interviewers’ overconfidence regarding their predictive abilities is that they often cannot ascertain whether, absent interviews, their predictions would have turned out better or worse, and they generally lack a large enough sample to draw any statistically valid inferences (Bishop & Trout, 2005).
Although managers weight a given trait or ability more heavily when it is evaluated through unstructured interviews rather than by alternative methods (e.g., paper-and-pencil tests) (Lievens, Highhouse, & DeCorte, 2005), a long-standing body of empirical evidence shows that unstructured interviews are unhelpful with selection decisions. For example, in the context of medical school applications, DeVaul, Jervey, Chappell, Caver, Short, and O’Keefe (1987) compare the students who were initially accepted to medical school with those who were rejected and find that only 28 percent of the difference between these groups is related to academic and demographic factors, while 72 percent is related to the admissions committee’s preferences developed through interviews. They report that when it comes to attrition and clinical performance during medical school and a subsequent year of postgraduate training, there are no significant differences between the accepted and the rejected groups, suggesting that interviews in this context are unhelpful to the decision-making process. In a similar fashion, Milstein, Wilkinson, Burrow, and Kessen (1981: 77) compare the performance of “a group of 24 applicants who were interviewed and accepted at the Yale University School of Medicine but went to other medical schools … with a group of 27 applicants who attended the same schools but had been rejected at Yale following an interview and committee deliberation.” In this context, too, the researchers find no statistically significant relationship between admission decisions and performance, again pointing to the inefficacy of interviews in aiding the achievement of the decision-making ends.
Medical school admissions decisions are, of course, not hiring decisions, but similar results are seen in hiring contexts. In a study of the hiring practices of elite professional services firms, Rivera (2012) finds that employers often seek candidates who enjoy similar leisure pursuits and have shared experiences and self-presentation styles. In doing so, Rivera shows that unstructured interviews may be less about assessing knowledge, skills, and abilities and more about exercising biases by replicating ourselves, including, but not limited to, our culture, gender, and ethnicity, in hiring decisions. Finally, through a meta-analysis, Schmidt and Hunter (1998) conclude that unstructured interviews are ineffective at predicting the performance of future employees.
Not only do we know that unstructured interviews are unhelpful in hiring decisions, but there is also empirical evidence that unstructured interviews reliably undermine those decisions (Bishop & Trout, 2005; DeVaul et al., 1987; Eysenck, 1954; Kausel et al., 2016; Milstein et al., 1981; Oskamp, 1965; Wiesner & Cronshaw, 1988). For example, as far back as the middle of the past century, in a large-scale empirical study, Bloom and Brundage (1947) found that the predictive gain from adding an interviewer’s assessment of a candidate’s experience, interest, and personality may well be negative. They specifically report that predictions based on test scores and interviewing were 30 percent worse than predictions based on test scores alone. More recently, Behroozi, Shirolkar, Barik, and Parnin (2020) have shown that even when tests are conducted in interview formats, such as the “whiteboard technical interviews” common in software engineering, the mechanics and pressure of the interview context reduce the efficacy of the technical tests. This effect is heightened especially among minorities and other underrepresented groups (Munk, 2021). Other recent research reports similar findings: for example, research on human judgment documents that when decision makers (e.g., hiring managers, admissions officers, parole boards) judge candidates based on a dossier and an unstructured interview, their decisions tend to be worse than decisions based on the dossier alone (Bishop & Trout, 2005).
In a similar fashion, Dana, Dawes, and Peterson (2013) show that adding an unstructured interview to diagnostic information when making screening decisions yields less accurate outcomes than not using an unstructured interview at all. In this case, even though decision makers may sense that they are extracting useful information from unstructured interviews, in reality, that information is not useful (Dana et al., 2013).
2.1.2 Structured Interviews
Unlike the unstructured version, a structured interview involves a formal process that more systematically considers “rapport building, question sophistication, question consistency, probing, note taking, use of a panel of interviewers, and standardized evaluation” (Roulin, Bourdage, & Wingate, 2019: 37) in hiring decisions. In this interview format, to predict good hires, an expert interviewer systematically and consistently poses the same set of validated questions about past performance to all candidates and immediately scores each answer based on a set of predetermined criteria relevant to the tasks of the job (Cappelli, 2019b).
Although structured interviews are designed to standardize the hiring process and minimize subjectivity and bias (Bohnet, 2016; Reskin & McBrier, 2000), in practice they are not much more successful than unstructured interviews in aiding hiring decisions, for at least three reasons. First, even though structured interviews may, in theory, be less biased and a better predictor of future job performance than their unstructured counterparts, they are not widely adopted in practice (König, Klehe, Berchtold, & Kleinmann, 2010; Roulin et al., 2019). The resistance to structuring interviews (Lievens et al., 2005; van der Zee, Bakker, & Bakker, 2002) is driven by interviewers’ belief that a candidate’s character is “far too complex to be assessed by scores, ratings, and formulas” (Highhouse, 2008: 339) that are predetermined in a structured format.
Second, even when structured interviews are accepted, they are often not well implemented, for various reasons. For example, structured interviews tend to be more costly to construct (Schmidt & Hunter, 1998), in part because of the difficulties of designing and validating standardized questions and evaluation criteria (Bohnet, 2016; Roulin et al., 2019). Also, in reality, we rarely see structured interviews conducted by trained and experienced interviewers who manage to keep their idiosyncratic personalities from distorting the process (Roulin et al., 2019). Even when structured interviews are conducted by trained and experienced interviewers, the process sometimes deviates to a semistructured or unstructured format. For instance, in conforming to a predetermined set of questions, the flow of conversation in a structured interview might feel stilted, awkward, or uncomfortable for both the interviewer and the candidate, thereby inadvertently shifting the interview process to a less structured format (Bohnet, 2016).
Third, even when structured interviews are conducted by trained and experienced interviewers and the process does not deviate to an unstructured format, empirical evidence shows that structured interviews may not be systematic and free of bias, because interviewers may use them to confirm their preexisting judgments rather than to evaluate the candidates—a potential self-fulfilling prophecy (Dougherty, Turban, & Callender, 1994). On the candidates’ side, there is also much room for introducing bias. For example, Stevens and Kristof (1995) show that applicants engage in significant impression management, even in structured interviews, thereby undermining the decision-making process. Furthermore, even when structured interviews are implemented properly, these issues and biases may not be eliminated: they may simply be shifted to the prior step of designing the interview and deciding its structure. Therefore, not only are structured interviews rare but, even when they are used and properly implemented, they are afflicted with issues that complicate the evaluation of performance and fit. It is not surprising, then, that Cappelli (2019b: 56) calls the structured interview the “most difficult technique to get right.”
Although research shows that interviews can undermine the aims of the hiring process, interviews have remained a popular norm for employee selection for more than a hundred years (Buckley, Norris, & Wiese, 2000; van der Zee et al., 2002). This popularity does not persist because the inefficacy of interviews is unknown; in fact, Rynes, Colbert, and Brown (2002) report that HR professionals appreciate the limitations of interviews. Still, hiring managers remain reluctant to outsource their judgment (Bohnet, 2016).
2.2 The Algorithmic Threat
Interviews, in both their unstructured and structured formats, are in practice (if not by design) ineffective at assessing fit or predicting future performance, and they create a significant opportunity for bias in hiring decisions (Chamorro-Premuzic & Akhtar, 2019; Rivera, 2012). However, proponents of the traditional view of interviewing might respond that there are no alternatives. This assertion falls short in the face of the second threat the traditional view faces: the algorithmic threat. Algorithms, even simple ones, are already no worse than humans, even expert humans (and are at times superior), at predicting the performance and fit of candidates in a number of domains (Bishop & Trout, 2005; Cappelli, 2020).
Algorithms can be an effective method for predicting future performance and fit primarily because the hiring challenge is, at its core, a prediction problem, and statistical algorithms are designed to address prediction problems (Danieli, Hillis, & Luca, 2016). For example, a simple statistical prediction rule (SPR) in a linear model predicts a desired property P (e.g., future performance) from a series of cues (e.g., education, experience, and past performance) such that P = w₁(c₁) + w₂(c₂) + w₃(c₃) + … + wₙ(cₙ), where cₙ and wₙ reflect the value and weight of the nth cue (Bishop & Trout, 2005). Research shows that even this simple statistical algorithm is, at least in overall effect, better than humans at hiring predictions, in part because such an algorithm is more consistent than humans (and cheaper, to boot). And, in practice, this algorithm can be better scaled and automated in a consistent way (Chamorro-Premuzic & Akhtar, 2019). Also, the increasing availability of good data, advances in statistical algorithms, and new capacities to analyze large-scale data have made this algorithmic route even more promising (Cappelli, 2020).
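The linear SPR just described can be sketched in a few lines of code. The cue names, values, and weights below are hypothetical illustrations, not figures drawn from the studies cited:

```python
# A minimal sketch of a linear statistical prediction rule (SPR).
# All cue names, values, and weights here are invented for illustration.

def spr_score(cues, weights):
    """Predict a property P as a weighted sum of cue values."""
    assert cues.keys() == weights.keys()
    return sum(weights[name] * cues[name] for name in cues)

candidate = {"education": 4.0, "experience": 6.0, "past_performance": 8.0}
weights = {"education": 0.2, "experience": 0.3, "past_performance": 0.5}

score = spr_score(candidate, weights)  # 0.2*4.0 + 0.3*6.0 + 0.5*8.0 = 6.6
```

In practice, the weights would be fitted (e.g., by ordinary least squares on past hiring outcomes) rather than chosen by hand, which is what makes such a rule consistent, cheap, and easy to scale across candidates.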
Indeed, more advanced statistical hiring algorithms based on machine learning can be better than humans at predicting performance and fit because they are specifically designed to “adaptively use the data to decide how to trade off bias and variance to maximize out-of-sample prediction accuracy” (Chalfin et al., 2016: 124). In this respect, for example, Cowgill (2019) finds that advanced statistical hiring algorithms based on machine learning predict job performance better than humans because they lack some of the biases from which humans suffer. Also, Chalfin et al. (2016) find that, compared to existing rank-ordering police hiring systems, machine learning algorithms that use sociodemographic attributes; prior behavior, including prior arrest records; and polygraph results would yield a 4.8 percent reduction in police shootings and physical and verbal abuse complaints.
In addition to the hiring domain, advanced statistical algorithms based on machine learning have been shown to be more effective than humans in a broader set of screening decisions in which “a decision-maker must select one or more people from a larger pool on the basis of a prediction of an unknown outcome of interest” (Rambachan, Kleinberg, Ludwig, & Mullainathan, 2020: 91). For example, Kleinberg, Lakkaraju, Leskovec, Ludwig, and Mullainathan (2018) show that machine learning algorithms outperform judges in bail decisions because they incorporate fewer irrelevant perceptions of the defendant (e.g., demeanor) into their decisions. Also, Dobbie, Liberman, Paravisini, and Pathania (2018) illustrate that machine learning algorithms minimize bias against certain types of applicants (e.g., immigrants). Other related studies in lending find that machine learning algorithms are better at predicting default (Fuster, Plosser, Schnabl, & Vickery, 2019) and are less discriminatory than face-to-face lenders (Bartlett, Morse, Stanton, & Wallace, 2019).
Critics of algorithmic decision-making in hiring (and elsewhere) raise at least two objections. The first pertains to the seeming ability of humans to pick up on soft, qualitative, or noncodifiable cues during interviews that are difficult to capture in algorithms (Gigerenzer, 2007; Highhouse, 2008). However, this is precisely where research shows a high likelihood and magnitude of bias clouding human decision-making. Indeed, the “speculation that humans armed with ‘extra’ qualitative evidence can outperform SPRs has been tested and has failed repeatedly” (Bishop & Trout, 2005: 33). Even if we grant that humans are skilled at inferring relevant information from subtle cues of personality and intellect, as some research suggests (Gigerenzer, 2007), statistical algorithms often simply draw on the same cues. And although many algorithms rely on codifiable cues (rather than bias-prone, noncodifiable ones), they are, in contrast to humans, more efficient and consistent, and they need not be managed with respect to their sense of self-esteem or self-importance (Chamorro-Premuzic & Akhtar, 2019).
The second objection to the algorithmic method of predicting future performance and assessing fit concerns fairness (Cappelli, Tambe, & Yakubovich, 2020; Newman, Fast, & Harmon, 2020; Raisch & Krakowski, 2021; Tambe, Cappelli, & Yakubovich, 2019). Although legitimate fairness concerns are associated with algorithmic predictions of human performance, research has shown that algorithms are often no worse than the alternative means of hiring, including the use of human judgment through interviews. For example, using data on teacher and police characteristics, Chalfin et al. (2016) show that statistical algorithms predict future performance better than humans. Though there are indeed fairness concerns with algorithms, these concerns are prevalent in human decision-making too (Danieli et al., 2016). Specifically, Danieli et al. grant the prevalence of fairness issues in algorithms but also highlight several comparably concerning psychological biases in human judgment. For example, in hiring contexts, humans engage in bracketing (i.e., overemphasizing subsets of choices over the universe of all options), such as choosing the top candidate interviewed on a given day instead of the top candidate interviewed throughout the search process (Danieli et al., 2016). In addition, Li (2020) summarizes research showing how human judgment in hiring may discriminate based on race, religion, national origin, sex, sexual orientation, and age.
Given this research, Cappelli (Reference Cappelli2020) warns us not to romanticize human judgment and to recognize “how disorganized most of our people management practices are now.” He notes, “At least algorithms treat everyone with the same attributes equally, albeit not necessarily fairly.”
Indeed, a significant portion of the algorithmic fairness issues arguably stems from human actions, as well as the lack of diversity in the humans who designed them (Li, Reference Li2020) and the types of data with which humans trained them (Cappelli, Reference Cappelli2020; De Cremer & De Schutter, Reference De Cremer and De Schutter2021). For example, Dastin (Reference Dastin2018) reports that Amazon’s recruiting algorithm was biased against women because it was trained to assess candidates by discovering patterns in submitted résumés over a ten-year time frame—most of those résumés were submitted by men (see also Cappelli, Reference Cappelli2019a).Footnote 11
As it turns out, recent research challenges the common assumption that biased data in the training stage of machine learning will lead to undesirable social outcomes. Specifically, Rambachan and Roth (Reference Rambachan and Roth2020) empirically examine the “bias in, bias out” assumption and highlight the conditions under which machine learning may reverse bias and ultimately prioritize groups that humans may have marginalized. More specifically, through mathematical modeling and simulation, they show that, unlike the bias generated by measurement errors caused by mislabeled data, the bias generated by sample selection may be flipped by machine learning such that the machine learning outcomes would favor groups that encountered discrimination in the training data.Footnote 12 Rambachan and Roth argue that this bias reversal occurs because the members of groups underrepresented in the original training data (e.g., women) who make the cut are typically statistically outstanding performers. As such, in subsequent rounds of learning, the algorithm is fed data in which women are disproportionately positively correlated with being outstanding performers. Rambachan and Roth show that this can ultimately reverse the underrepresentation in the data that is due to human decision makers.
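The sample-selection mechanism can be made vivid with a minimal toy simulation. To be clear, this is our illustrative sketch, not Rambachan and Roth’s actual model: the equal-skill distributions, the group-specific hiring bars, and the group-mean “model” are all simplifying assumptions. A screener who applies a higher bar to women ensures that the women who are hired are, on average, stronger performers; a model trained only on the hired candidates then scores women above men, reversing the direction of the original bias:

```python
import random
import statistics

random.seed(0)

def simulate(n=100_000, male_bar=0.0, female_bar=1.0):
    """Toy sketch of sample-selection bias reversal (illustrative only)."""
    hired = []  # (group, observed performance) for hired candidates only
    for _ in range(n):
        group = random.choice(["man", "woman"])
        skill = random.gauss(0.0, 1.0)        # both groups equally skilled
        bar = male_bar if group == "man" else female_bar
        if skill > bar:                       # the biased selection step
            hired.append((group, skill))
    # "Training" on the hired sample only: score each group by the mean
    # observed performance of its hired members.
    return {
        g: statistics.mean(s for grp, s in hired if grp == g)
        for g in ("man", "woman")
    }

scores = simulate()
# Because only unusually strong women survive the biased screen, the
# learned score for women exceeds that for men.
assert scores["woman"] > scores["man"]
```

By contrast, if the bias had instead mislabeled women’s observed performance downward (a measurement error rather than sample selection), the learned scores would inherit the bias rather than reverse it, which is why Rambachan and Roth’s result is confined to selection-driven bias.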
We have thus far considered two objections to using algorithms instead of interviews, and we’ve suggested that these objections fall short. Yet one might correctly point out that many more objections to algorithms have recently appeared in the algorithmic ethics literature (Birhane, Reference Birhane2021; Hunkenschroer & Luetge, Reference Hunkenschroer and Luetge2022; Martin, Reference Martin2019; Müller, Reference Müller and Zalta2021; Tasioulas, Reference Tasioulas2019; Tsamados et al., Reference Tsamados, Aggarwal, Cowls, Morley, Roberts, Taddeo and Floridi2022). For example, there are concerns related to algorithms systemically excluding certain individuals (Creel & Hellman, Reference Creel and Hellman2022), eliciting organizational monocultures (Kleinberg & Raghavan, Reference Kleinberg and Raghavan2021), or disproportionately harming marginalized groups (Birhane, Reference Birhane2021); worries related to the legitimacy and trustworthiness of algorithms (Benn & Lazar, Reference Benn and Lazar2022; Martin & Waldman, Reference Martin and Waldman2022; Tong, Jia, Luo, & Fang, Reference Tong, Jia, Luo and Fang2021) and the lack of explainability in the case of opaque algorithms (Anthony, Reference Anthony2021; Kim & Routledge, Reference Kim and Routledge2022; Lu, Lee, Kim, & Danks, Reference Lu, Lee, Kim and Danks2020; Rahman, Reference Rahman2021; Rudin, Reference Rudin2019; Selbst & Powles, Reference Selbst and Powles2017; Véliz, Prunkl, Phillips-Brown, & Lechterman, Reference Véliz, Prunkl, Phillips-Brown and Lechterman2021; Wachter, Mittelstadt, & Floridi, Reference Wachter, Mittelstadt and Floridi2017);Footnote 13 issues related to whether algorithms preclude us from taking people seriously as individuals (Lippert-Rasmussen, Reference Lippert-Rasmussen2011; Susser, Reference Susser, Jones and Mendieta2021); and concerns related to whether automated systems create responsibility or accountability gaps (Bhargava & Velasquez, Reference Bhargava and Velasquez2019; Danaher, Reference Danaher2016; Himmelreich, Reference Himmelreich2019; Nyholm, Reference Nyholm2018; Roff, Reference Roff, Allhoff, Evans and Henschke2013; Simpson & Müller, Reference Simpson and Müller2016; Sparrow, Reference Sparrow2007; Tigard, Reference Tigard2021), among other concerns (Bedi, Reference Bedi2021; Tasioulas, Reference Tasioulas2019; Tsamados et al., Reference Tsamados, Aggarwal, Cowls, Morley, Roberts, Taddeo and Floridi2022; Yam & Skorburg, Reference Yam and Skorburg2021). In short, there’s now a rich literature involving a wide range of concerns related to adopting algorithms in lieu of human decision makers (Hunkenschroer & Luetge, Reference Hunkenschroer and Luetge2022; Martin, Reference Martin2022; Müller, Reference Müller and Zalta2021; Tsamados et al., Reference Tsamados, Aggarwal, Cowls, Morley, Roberts, Taddeo and Floridi2022). And the thought might be put more forcefully: insofar as these two aforementioned objections could tell against using algorithms (and in turn against the force of the interview puzzle), many more objections—like the ones articulated in the algorithmic ethics literature—may succeed.Footnote 14
We grant the force of this concern. Taken together, the arguments developed in the algorithmic ethics literature constitute a powerful concern regarding using algorithms in lieu of human decision makers. Furthermore, to the extent that these objections to algorithms succeed, it would weaken the strength of the algorithmic threat (and, correspondingly, the force of the interview puzzle). However, for our ultimate aims, this does not concern us. This is because our broader project is not to defend algorithms—we do so in the context of the interview puzzle strictly for the sake of argument. Our ultimate aim is instead to argue that even if these wide-ranging objections to the use of algorithms fall short, there nevertheless remain independent moral considerations that tell against abdicating hiring choices to an algorithm. Crucially, the kinds of moral considerations on which we draw do not depend on certain bad outcomes that may arise due to algorithms. This is to say, even if algorithms were not systemically excluding individuals in arbitrary ways (Creel & Hellman, Reference Creel and Hellman2022), did not result in an organizational monoculture (Kleinberg & Raghavan, Reference Kleinberg and Raghavan2021), did not create responsibility gaps (Himmelreich, Reference Himmelreich2019; Johnson, Reference Johnson2015; Martin, Reference Martin2019; Matthias, Reference Matthias2004; Roff, Reference Roff, Allhoff, Evans and Henschke2013; Sparrow, Reference Sparrow2007), or did not elicit other morally untoward outcomes, there nevertheless remains an independent moral concern about firms abdicating their choices in the hiring domain to an algorithm. So, the argument we will now provide might be understood as providing further, independent grounds to resist using algorithms (at least in the context of hiring). 
Moreover, the arguments we offer do not hinge on certain bad outcomes arising due to using algorithms; as such, the force of our arguments remains, even if the bad outcomes associated with algorithms are ultimately engineered away.
2.3 Taking Stock of the Interview Puzzle
The behavioral and algorithmic threats present a significant twofold challenge and raise the interview puzzle for proponents of the traditional view of interviewing. To be sure, this does not mean that the traditional view is not, in part, correct. Finding high-performing candidates who fit the job requirements, as the traditional view posits, is plausibly an important end for firms to pursue. However, the behavioral and algorithmic threats, taken in conjunction, challenge whether interviews are a suitable means toward that end. Crucially, if interviews are only about this end, then the interview puzzle remains and threatens to undermine our justification for conducting interviews. We will now argue, however, that there is more to be said on behalf of interviews than the traditional view accounts for.
Before proceeding, we offer a brief clarification about an assumption we make in the next section: we treat the interview process as equivalent to a hiring process with human decision makers. But, strictly speaking, this assumption is not always correct. Hiring processes with human decision makers can occur without interviews, because interviews are not the only available basis for selection. For example, tests or work samples might instead be used. However, tests and work samples are apt in a much narrower range of positions. Moreover, as HRM textbooks note, “interviews are one of the most common methods used for selection” (Mathis et al., Reference Mathis, Jackson, Valentine and Meglich2016: 259), and “interviews continue to be the primary method companies use to evaluate applicants” (Mondy & Martocchio, Reference Mondy and Martocchio2016: 165). In fact, “while not all employers use tests, it would be very unusual for a manager not to interview a prospective employee” (Dessler, Reference Dessler2020: 192). For these reasons, we use “the interview process” interchangeably with “hiring process conducted by human decision makers.” At the end of section 4, we briefly discuss the implications of relaxing this assumption.
3. THE VALUE OF CHOICE THEORY OF INTERVIEWS
The interview puzzle can be dissolved once we recognize that interviews play additional roles beyond predicting performance and fit. For this reason, even if the behavioral and algorithmic threats undermine the plausibility of interviews serving as a means toward the end of securing an employee who fits the role’s and organization’s needs, we need not conclude that the practice of interviewing is unjustified or something that ought to be abandoned: this is because interviews are a source of other kinds of value and are not exclusively a means for predicting performance and fit.
To be clear, on the view we develop, we do not challenge the importance of the end posited by the traditional view (i.e., the end of hiring an employee who fits the role’s and organization’s needs); rather, we argue that additional kinds of value are implicated in the practice of interviewing. Thus we offer a pluralistic theory of interviewing and argue that once we recognize the wider range of contenders for the kinds of value generated through interviewing, we can see that abandoning interviews would risk the loss of certain important kinds of value.
To understand the additional kinds of value implicated in the practice of interviews, we draw on philosopher T. M. Scanlon’s (Reference Scanlon and McMurrin1988, Reference Scanlon1998) account of the value of choice. Scanlon’s (Reference Scanlon2013: 12) account “begins from the fact that people often have good reason to want what happens in their lives to depend on the choices they make, that is, on how they respond when presented with the alternatives.” His work on the value of choice has been significant for debates and fields of inquiry as wide-ranging as paternalism (Cornell, Reference Cornell2015), bioethics (Walker, Reference Walker2022), the freedom and moral responsibility debate (Duus-Otterström, Reference Duus-Otterström2011; Fischer, Reference Fischer2008), and contract theory (Dagan, Reference Dagan2019).
On the value of choice account, at least three different kinds of value can be generated when making a choice: instrumental, representative, and symbolic. The first is the instrumental value of a choice: if I am the one who makes the choice, I might make it more likely that I realize some end than were I not given the opportunity to choose. So, for example, if I’m a prospective car buyer and am given the choice over what color I want for my car, my making this choice realizes a certain instrumental value: of making it more likely that the car will satisfy my aesthetic preferences (in contrast to, for example, were the dealership to choose the color of the car on my behalf or were the color to be selected using a random color generator). So, the instrumental value in a choice is realized when it makes it more likely that a desired end of a prospective decision maker is achieved.
The second is the representative value of choice: this is the value that is generated when my making the choice alters the meaning of the outcome of the choice—crucially, this value is realized even if my making the choice is instrumentally worse at achieving certain ends than an alternative method of decision-making (e.g., an algorithm, a coin flip, deference to an expert). For example, it’s important that I am the one who chooses a gift for my partner, not because I’m more likely to satisfy their preferences than they are (were they to choose the gift themselves), but rather because there is value in the fact that I was the one who chose it; in choosing the gift, I expressed myself (e.g., my desires, beliefs, and attitudes toward my significant other) through that act. More simply, representative value relates to how the outcome of the choice takes on a different meaning in virtue of who makes the choice.
The third is the symbolic value of choice: this is the value associated with certain choices reflecting that one is a competent member of the moral community who has standing that is “normally accorded an adult member of the society” (Scanlon, Reference Scanlon1998: 253). For example, if I, as an adult, were not permitted to choose my bedtime, this would be demeaning and infantilizing. This is so even if a sleep specialist choosing my bedtime would result in outcomes better for my circadian rhythm and other physiological markers. My being able to choose reflects the judgment that I am a “competent, independent adult” (Scanlon, Reference Scanlon1998: 253). This is the value that is risked when one is denied the opportunity to make certain choices, ones that, in a given social context, are choices that “people are normally expected to make … for themselves” (Scanlon, Reference Scanlon1998: 253).
These, then, are three candidates for the value generated through making a choice: the first is instrumental, and the latter two are noninstrumental sources of value. This taxonomy may not be exhaustive, but it captures three important kinds of value that making a choice can generate. Thus, if a choice is abdicated, (at least) these three kinds of value are at risk and are potential candidates for the value that would be lost.
Returning to the context of interviewing, when firms conduct interviews, they are making choices about whom to employ. So, let’s now turn to how the value of choice account bears on interviewing. We will discuss each sort of value generated through choice—instrumental, representative, and symbolic—in turn.
The first is the instrumental value of choice. Securing instrumental value is the chief value with which the traditional view of interviewing is concerned. The thought goes as follows: interviewing realizes the instrumental value to the extent that it helps the firm predict a candidate’s performance and fit. Those who are inclined to preserve interviews, on the basis of the traditional view of interviewing, might expect that the instrumental value of choice realized in interviewing—helping a firm better predict a candidate’s performance and fit—is what both explains why we interview and also what justifies its costs.
Yet the instrumental value of interviewing is precisely what is called into question by the interview puzzle. Interviewing does not excel at generating the purported instrumental value that it is thought to elicit (namely, predicting future performance and fit). So, if the sole kind of value that could be generated through interviewing is instrumental value, then the grounds for the practice are undermined. But as the value of choice account tells us, there is a wider range of contenders for the kinds of value generated in making a choice. The critical oversight of the traditional view is its failure to recognize that the value generated through interviewing is not entirely conditional on the instrumental value of choice, given that there can be noninstrumental value generated through the choice.
This brings us to the second potential value—and one overlooked by the traditional view—that is realized through interviews: the representative value of choice. As Scanlon (Reference Scanlon1998: 253) points out, we value and want certain choices to “result from and hence to reflect [our] own taste, imagination, and powers of discrimination and analysis.” In the interview context, we may value the fact that we are the ones choosing with whom we work, and there is value lost (i.e., representative value) when we abdicate that choice, even if our choosing does not as effectively realize the ends of predicting performance and fit as an algorithm. An algorithm might be better at predicting which romantic partner we should date, whom we should befriend, or which university we should attend—while this all might be correct, abdicating these choices and deferring to an algorithm would result in us losing something of value: representative value. Choosing to whom we relate in the workplace is a way “to see features of ourselves manifested in actions and their results” (Scanlon, Reference Scanlon1998: 252). The representative value of a choice is the value that arises in virtue of the choice taking on a different meaning: because of both the fact of who makes the choice and the choice representing or expressing the person’s judgments, desires, and attitudes.
The third value generated through interviewing, and another oversight of the traditional view of interviewing, is the symbolic value of choice. Scanlon (Reference Scanlon2019: 4) points out, “If it is generally held in one’s society that it is appropriate for people in one’s position to make certain decisions for themselves, then failing to make such a decision for oneself or being denied the opportunity to make it, can be embarrassing, or even humiliating.” Thus the symbolic value of choice is what is lost when a person for whom it would be appropriate (in a given social context) to make a certain decision is precluded from making that decision. For example, to the extent that workplace norms in a given society involve members of an organization typically having a choice in their future colleagues—people with whom they would collaborate but also, in some cases, those whom they would befriend or with whom they would commiserate and form community (Casciaro, Reference Casciaro, Brass and Borgatti2019; Estlund, Reference Estlund2003; Porter, Woo, Allen, & Keith, Reference Porter, Woo, Allen and Keith2019)—through interviewing, depriving people of that choice may result in a loss of symbolic value.Footnote 15 Relatedly, a certain prestige and status are implicated in making certain choices (including selecting future colleagues through interviewing) that figure into the symbolic value of choice; this is especially vivid, for example, when alumni of a university are involved in on-campus recruiting at their alma mater (Binder, Davis, & Bloom, Reference Binder, Davis and Bloom2015). This prestige and status that are implicated in the symbolic value of choice are also part of what would be lost were firms to forsake interviews. Crucially, substituting interviews with algorithms can result in a loss of symbolic value even if, as a matter of fact, an algorithm may arrive at a better assessment of a candidate’s expected performance and fit.Footnote 16
Although the representative value of choice and the symbolic value of choice may seem similar, especially because, as Scanlon (Reference Scanlon1998: 253) puts it, “representative and symbolic value may be difficult to distinguish in some cases,” they are not the same. Symbolic value concerns how making certain choices reflects one’s standing, whereas representative value concerns how the meaning of a certain outcome depends on who is making the choice that elicited the outcome. Despite these differences, both are kinds of noninstrumental value, and neither depends on the instrumental effectiveness of the choice with respect to some end (Aristotle, Reference Ostwald1962; Donaldson, Reference Donaldson2021; Donaldson & Walsh, Reference Donaldson and Walsh2015; Gehman, Treviño, & Garud, Reference Gehman, Treviño and Garud2013; Kant, Reference Kant, Gregor and Timmermann2012; O’Neill, Reference O’Neill1992; Zimmerman & Bradley, Reference Zimmerman, Bradley and Zalta2019).
Our interviewing practices can be vindicated once we recognize that the choice involved in the interview process can realize both representative and symbolic value. The key point is that “the reasons people have for wanting outcomes to be dependent on their choices often have to do with the significance that this dependence itself has for them, not merely with its efficacy in promoting outcomes that are desirable on other grounds” (Scanlon, Reference Scanlon1998: 253). And the fact that representative and symbolic value are threatened when abdicating the choice involved in interviewing a candidate—the choice of whom to relate to in the workplace—generates pro tanto moral reason to preserve interviews as an organizational practice. Crucially, the representative and symbolic value undergirding our interview practices is not imperiled by the behavioral or algorithmic threats.
In other words, once we recognize the broader range of contenders for the kinds of value generated through interviewing, we can see that the behavioral and algorithmic threats only undermine part of the potential value in interviewing—its instrumental value. But we still have pro tanto moral reason to continue the practice of interviewing, given the noninstrumental value—representative and symbolic value—that may be lost were we to abandon the practice.
4. CLARIFICATIONS AND OBJECTIONS
We now turn our attention to a few clarifications and some potential objections. First, it’s worth keeping in mind that even the noninstrumental values in a choice do not always tell in favor of preserving, rather than abdicating, a choice. For example, with respect to representative value, we might prefer, in some circumstances, for our choices not to reflect our judgments, desires, and attitudes. If one’s organization is considering hiring one’s close friend, one might prefer to have the “question of who will get a certain job (whether it will be my friend or some well-qualified stranger) not depend on how I respond when presented with the choice: I want it to be clear that the outcome does not reflect my judgment of their respective merits or my balancing of the competing claims of merit and loyalty” (Scanlon, Reference Scanlon1998: 252). In other words, in circumstances that might present a conflict of interest, for example, there might be reasons related to representative value that tell against preserving the choice.
Second, the value of choice is not simply about having a greater number of options from which to select. This is to say, the value of choice generates reasons that “count in favor of ‘having a choice,’ but for reasons of all three kinds having more choice (over a wider range of alternatives) is not always better than less. Being faced with a wider range of alternatives may simply be distracting, and there are some alternatives it would be better not to have” (Scanlon, Reference Scanlon2019: 4). So, in the context of interviewing, we remain agnostic about how the value of choice is affected by having more candidates from whom to select.
Third, one might doubt whether symbolic value would in fact be risked were we to forgo interviews. The point might be pressed as follows: because many (or even most) employees are not involved in hiring decisions, it is not clear that symbolic value would be lost (or that the failure to be involved in the interview process would be demeaning).Footnote 17 We grant that symbolic value may not be risked in many instances of abdicating a choice. But this clarification points the way to an advantage of our value of choice account: its contextual sensitivity. As Scanlon (Reference Scanlon1998: 253) notes, a key point with respect to whether symbolic value is risked in a given situation is whether the situation is one “in which people are normally expected to make choices of a certain sort for themselves.” Ascertaining whether there is such an expectation in place in a given hiring context and, in turn, whether symbolic value would be lost will depend on certain sociological facts pertaining to the expectations in the given workplace and the norms governing that workplace culture, field, or industry.Footnote 18 This means that there is an important role for empiricists to play in ascertaining the workplace contexts, fields, or industries in which symbolic value is risked to a greater or lesser extent. And in contexts in which the norms associated with choosing the members of one’s organization are weaker, the reasons provided by the symbolic value of choice will be correspondingly weaker.
Fourth, one might raise the following question: what about organizations that outsource hiring to an external head-hunting firm? On our view, such an approach would, in effect, be morally akin to abdicating the choice to an algorithm, with respect to the value of choice. That said, there might be other sorts of considerations—for example, the various objections discussed in the algorithmic ethics literature mentioned earlier—that make relying on algorithms morally worse than abdicating the choice to an external head-hunting firm. Still, with respect to the value of choice, the considerations would indeed be morally akin. But this need not mean that there is no role for external head-hunting firms at all. This is because the concerns with respect to the value of choice primarily arise insofar as the firm defers to the judgment of the external head-hunting firm. This, however, does not preclude soliciting advice about hiring decisions from HR consultants or head-hunting firms. Notably, in the context of algorithms, deference to the algorithm is much more likely given that many algorithms are opaque. Moreover, failing to defer to the judgments of the algorithm—that is, picking and choosing on a case-by-case basis when to follow its prescriptions—drastically undercuts its overall instrumental benefits (Bishop & Trout, Reference Bishop and Trout2005).
Fifth, perhaps, all things considered, in some instances the costs of interviewing may be too burdensome and a firm might be forced to forgo the practice. Perhaps, in other instances, the importance of finding the right person is far too weighty—for example, selecting an airline pilot—for a human to make the decision if an algorithm would do so more effectively. But even in these cases, were we to abandon interviewing for a different selection method (e.g., an algorithm), it’s worth keeping in mind that there may still be something of normative significance lost, that is, representative or symbolic value.Footnote 19
How might these trade-offs be managed? One potential approach might be as follows: suppose one regards instrumental value to be of much greater significance in the business realm than the sorts of noninstrumental value to which we’ve drawn attention. In such a case, a hybrid approach might be considered. Such an approach might involve conducting the initial screening with an algorithm and leaving the ultimate decision to a member of the organization. This may allow for reducing the potential trade-offs between the instrumental and the noninstrumental sources of value of choice.Footnote 20
In other words, our view is not that, in instances when an algorithm is vastly superior at achieving a given end, firms should pursue the drastically less instrumentally effective approach. As Scanlon (Reference Scanlon2019: 4) notes, the various reasons for the value of choice “can conflict with reasons of other kinds, particularly with instrumental reasons.” So, we are not claiming that firms must always conduct interviews, instead of using algorithms. Nor are we claiming that the instrumental considerations are not of moral significance—in some instances, they may very well be of overriding moral importance.Footnote 21 Rather, our point is that multiple kinds of value can be generated through the practice of interviewing—including sources of value that may generate conflicting reasons—and that an adequate theory of interviewing should not overlook this fact. If we are to abdicate interviews in a given context, we should do so in full view of the kinds of value that are risked.Footnote 22
Sixth, it’s now worth revisiting the assumption we articulated at the end of section 2: treating the interview process as equivalent to a hiring process with human decision makers. As we acknowledged, this assumption is not always, strictly speaking, correct. A hiring process—including one in which humans are making the decisions—might not involve interviews at all; perhaps the hiring process involves choosing on the basis of work samples or tests.
So, when we relax this assumption, what follows? Our view would still imply that abdicating the hiring process entirely to algorithms would risk the various values of choice. However, our value of choice account does not entail a particular mode of choosing for a human decision maker—whether interviews, work samples, or tests. With respect to the narrow range of professions where work samples or tests can aptly be implemented, our value of choice arguments are neutral between choosing such an approach and interviewing (but of course, the value of choice account is not neutral between either of these routes and abdicating the choice to an algorithm).Footnote 23 Interviews are a way—the most prominent and common way, and the way most broadly applicable across a range of positions—for us to choose the members of our organizations, but they are indeed not the only way to choose in the hiring process.
To summarize, we have offered an account of some heretofore underappreciated normative dimensions of a widespread business practice, namely, interviewing. Our view helps address some of the challenges to which the traditional conception of interviewing succumbs. The traditional view has difficulty explaining why interviews persist and justifying why we should not abandon them, given their costs, our poor ability to predict performance and fit, and the presence of algorithmic alternatives. Our value of choice theory of interviewing both explains why interviews persist and justifies why there are grounds not to abandon the practice: interviews play an important normative function by securing noninstrumental sources of value in hiring.
5. FUTURE AVENUES OF RESEARCH
Our value of choice account of interviewing suggests several new avenues of research. First, a significant body of research in employment ethics primarily emphasizes the ethics of how employers ought to treat their employees (Arnold, Reference Arnold, Brenkert and Beauchamp2010; Barry, Reference Barry2007; Bhargava, Reference Bhargava2020; Brennan, Reference Brennan2019; McCall, Reference McCall2003; Werhane, Radin, & Bowie, Reference Werhane, Radin and Bowie2004), but there is much less work, apart from discrimination-related issues, on the ethics of what is owed to prospective employees. Our work highlights the significance of a range of understudied issues to explore in this domain. Although some have explored the question of what is owed to former employees of a firm (Kim, Reference Kim2014), what, if anything, is owed to potential employees, such as candidates who participate in interviews? Other such issues include, for example, the ethics of exploding offers, accepting applications from candidates who will never be considered, and alerting candidates of rejection. On the side of the candidate, issues include the ethics of feigning enthusiasm for an interview, pursuing an interview merely to solicit an external offer for negotiation leverage, and holding on to offers that one is confident one will not accept.
Second, our account of interviewing points the way to questions about what makes employment relationships meaningful (Robertson, O’Reilly, & Hannah, 2020). Some contributors to the scholarly conversation on the future of work have argued that employers owe it to their employees to provide meaningful work (Bowie, 1998; Kim & Scheller-Wolf, 2019; Michaelson, 2021; Veltman, 2016).Footnote 24 By attending to the broader range of values associated with interviewing, managers may find opportunities to make work and employment relationships more meaningful (Bartel, Wrzesniewski, & Wiesenfeld, 2012; Freeman, Harrison, Wicks, Parmar, & De Colle, 2010; Rosso, Dekas, & Wrzesniewski, 2010). An important question to address, then, is how the process of being selected for a position (whether through an interview or by way of an algorithm) can preserve or promote the meaningfulness of work (Carton, 2018; Grant, 2012; Jiang, 2021; Kim, Sezer, Schroeder, Risen, Gino, & Norton, 2021; Rauch & Ansari, 2022).
Third, there is a sense in which using algorithms in hiring decisions deepens the informational asymmetry between candidates and employers (Curchod, Patriotta, Cohen, & Neysen, 2020; Yam & Skorburg, 2021: 614). Switching to algorithms in hiring may prevent candidates from developing a better understanding of their prospective colleagues and of the prospective employer’s workplace culture and norms. Had an interview been conducted, by contrast, the candidate might have acquired this sort of valuable information, even if fallibly. Future scholars should explore the public policy implications of forgoing interviews, especially in jurisdictions with employment at will. The symmetrical right to exit is sometimes discussed as a potential justification for employment at will (Bhargava & Young, 2022; Hirschman, 1970; Maitland, 1989; Taylor, 2017). But when candidates and employers enter the employment relationship on starkly asymmetric informational grounds (Caulfield, 2021), it is worth exploring whether the fact that both parties may exit the relationship loses some of its justificatory force with respect to employment at will, and whether supplementary regulatory constraints would be in order.
6. CONCLUSION
The traditional view of interviewing, espoused by practitioners and management scholars alike, holds that interviews are conducted, despite the steep costs associated with the process, to predict a candidate’s performance and fit in relation to a vacancy. We argue that the traditional view faces a twofold threat: the behavioral threat and the algorithmic threat. The behavioral threat arises from a large body of behavioral evidence indicating that we are poor predictors of future performance and bad judges of fit. The algorithmic threat arises from the fact that algorithms are already better predictors of performance and fit than we are in a number of domains, including hiring.
If the traditional view captures all there is to interviewing, then the behavioral and algorithmic threats undermine the justification for conducting interviews. However, we argue that the practice of interviewing can be vindicated once we recognize that there is a broader range of contenders for the kinds of value that can be realized through interviewing; crucially, some of the noninstrumental value realized through interviewing remains insulated from the behavioral and algorithmic threats. In short, even if algorithms are better predictors of performance and fit than we are, it does not follow that we ought to abandon our interview practices: important kinds of noninstrumental value are generated through interviewing that could be lost were we to forgo the practice.
Acknowledgments
The authors contributed equally. For helpful comments, feedback, or conversation, we thank Alan Strudler, Ben Bronner, Carson Young, Esther Sackett, Gui Carvalho, JR Keller, Julian Dreiman, Matthew Bidwell, Matthew Caulfield, Peter Cappelli, Robert Prentice, Samuel Mortimer, Sonu Bedi, Suneal Bedi, Thomas Choate, Thomas Donaldson, and audiences at the 2019 Summer Stakeholder Seminar at the University of Virginia’s Darden School of Business, the 2021 Society for Business Ethics meeting, the Georgetown Institute for the Study of Markets and Ethics, and the Dartmouth Ethics Institute. We are also grateful to associate editor Jeffrey Moriarty and three anonymous reviewers for their helpful feedback.
Vikram R. Bhargava (vrb@gwu.edu, corresponding author) is an assistant professor of strategic management and public policy at the George Washington University School of Business. He received a joint PhD from the University of Pennsylvania’s Wharton School and Department of Philosophy.
Pooria Assadi is an assistant professor of management and organizations in the College of Business at California State University, Sacramento. He received his PhD in strategic management from Simon Fraser University’s Beedie School of Business and was a visiting scholar at the University of Pennsylvania’s Wharton School.