Introduction
An abundance of information on effective assessment and development is readily available via a variety of outlets. For example, there are peer-reviewed articles that describe the latest theory and research, best-selling books, expert reports on industry trends, and social media testimonials from executives with decades of practical experience (Ployhart & Bartunek, 2019). Of course, these sources are not equally valuable, and even the most valuable sources have limitations. Thus, it becomes the job of responsible consumers of assessment and development knowledge to sift through the vast array of sources to identify unbiased information applicable to their purpose (Lowman & Cooper, 2018). For practitioners, that purpose may involve selecting appropriate techniques to assess and develop human talent, whereas scholars may theorize about and investigate the techniques practitioners implement.
In doing so, knowledge consumers may struggle to gain accurate knowledge about assessment and development, especially when many variations of a practice exist (e.g., among various performance indices, evaluation schedules, and feedback approaches; Murphy et al., 2019). Some practitioners might resort to off-the-shelf products marketed as "scientifically validated" and "evidence-based" with limited access to the supporting evidence. Even in the most robust scientific efforts, academics may lack first-hand experience and understanding of organizational realities, or of the feasibility of the recommendations they make, leading to a lack of clarity about what exactly should inform future research and theory (Rynes, 2012; Rynes et al., 2001). Gaining the wealth of available knowledge is ultimately throttled by inaccessible sources, conflicting information, and insufficient detail to infer generalizability. To aid researchers and practitioners in addressing this challenge, we provide a three-part epistemology aimed at gaining the most complete information about assessment and development practices. Rather than presenting new information, in this paper we present and support a framework for processing information. Further, we explain how this framework for acquiring and processing information, or epistemology, offers greater clarity or a new perspective on known issues.
Ways to gain knowledge: epistemology
For centuries, philosophers of science have studied the basic question of how one comes to know something. To know includes being aware of information and its accuracy, and using multiple processes and sources to gather it. Epistemology, a sub-field of philosophy, describes how knowledge is acquired and justified (Audi, 2011). An epistemology can range from accepting doctrines from authority, to conducting empirical research, to gaining insight from personal experience. In this paper, we apply an epistemology to assessment and development to demonstrate its value in gathering the most complete knowledge on various practices within these areas.
Audi (2011) explains that one aspect of epistemology is examining the justification of a belief, asserting that we are justified if we have some basis for the belief. There are several bases for believing we know something. Our proposed epistemology consists of three bases for knowledge: theory, empirical research, and observations of practice. Theory is a systematically organized set of knowledge, including assumptions, principles, and relationships among concepts (Sutton & Staw, 1995). Empirical research involves methods of gathering and interpreting data to uncover or confirm knowledge (APA, 2015). Practice allows knowledge to amass from one's own direct observations, or awareness of others' observations, of effective and ineffective techniques (Rupp & Beal, 2007). These ways of gaining knowledge differ in the certainty they provide about verified facts, their connections with other ways of knowing, and their reliance on internal states of awareness versus information external to the knowledge seeker (Shieber, 2019).
Our epistemology relies on a combination of more certain pieces of interconnected information derived from coherent theoretical propositions, rigorously conducted empirical research evidence, and effective organizational practice. It represents a divergence from the traditional scientific assumption that valid knowledge is gained only through systematic, deductive research (McLelland, 2006). It is also more amenable to the assessment and development fields, given their inherent blend of science and practice (Benjamin & Baker, 2000).
We argue that this approach is important, especially because the field of personnel assessment and decisions has historically eschewed insights from practice (Rynes et al., 2001; Rynes & Bartunek, 2017) while simultaneously publishing "theory" that is often little more than intuitive judgment propped up by post-hoc threading together of past theory developed in the same manner (Rupp et al., 2017). Bringing together these three sources of knowledge is key to moving the field forward, illuminating both what we know about assessment and development and the strength of that evidence.
A three-part epistemology
Our proposed epistemology (Figure 1) provides several contributions. First, it provides a unique and organized process for identifying what we do and do not know about assessment and development. Evaluating and integrating information from theory, empirical research, and practice provides an encompassing appraisal of the state of knowledge in an area. Thus, we place the greatest emphasis on the intersection of all three sources. Second, it provides a means for evaluating assessment and development practices based on converging evidence. Finally, our approach calls for investigating relationships between assessment/development practices and their role in the larger contextual system, which research has been criticized for ignoring (Jackson & Schuler, 1995; Johns, 1993; Parrigon et al., 2017).
In the following sections, we describe (a) the three focal segments of our epistemology (see Figure 1); (b) how knowledge is acquired for each segment; (c) each segment’s strengths and limitations; and (d) how each segment contributes to knowledge. Then, we examine what we can gain from a lack of agreement or even conflict between and within each knowledge base. Finally, we showcase convergence among ways of knowing about assessment (specifically assessment centers) and development (specifically training).
Theory
Theory is a set of reasoned beliefs about relationships among variables. Bacharach (1989) states "a theory is a statement of relations among concepts within a set of boundary assumptions and constraints" (p. 496). Theories can impel action and force us to go further than opinions to substantiate our beliefs by asking how we formed ideas, how things work, and how they might be done differently (Nealon & Giroux, 2012).
Advantages and disadvantages of theory
A theory can summarize knowledge accumulated over time across a body of research (Suddaby, 2014), and succinctly capture results of systematic interventions in practice (Locke, 2007). However, relying on theory alone can lead to a neglect of experimentation and observation (Cucina et al., 2014). At times, hypothesized statements may be nothing more than "received doctrine" or "academic intuition." Furthermore, a theory can bias and impede progress if it continues to be relied upon without substantiation (Hambrick, 2007). A theory is strengthened if supported by research and field observations of fruitful or ineffective practice.
Asking specific questions about a theory can assist in determining its strength: How widely adopted is the theory? Are there other competing perspectives? How significant are areas of disagreement? Has the theory been tested through research and applied in practice? Strong theories have been empirically tested multiple times with replicated findings (Cucina et al., 2014; Hambrick, 2007; Woo et al., 2017). Untested theories risk devolving into pseudotheories, or "explanations based on conjecture, personal opinion, and limited findings that cannot be called true theories" (Woo et al., 2017, p. 257). Thus, the strength of a theory partly depends on whether empirical research supports the phenomenon of interest.
Empirical research
Empirical research involves gathering data to confirm theoretical relationships or identify new relationships. An emphasis on empirical research as a basis of knowledge is compatible with the recent emphasis on "evidence-based management" (EBM; Barends et al., 2014; Barends & Rousseau, 2018), which refers to a variety of methods for basing business decisions on empirically gathered information. EBM can assist in raising and addressing questions about what constitutes evidence, how strong that evidence is, and how academics and practitioners can improve the quality of their evidence (Rousseau & Gunia, 2016).
Approaches to empirical research
Deductive research (i.e., the basis of empirical theory testing) is useful when relevant theories exist. However, both inductive and abductive research also fulfill important roles in identifying and explaining phenomena (Locke, 2007). Woo et al. (2017) note that inductive research facilitates exploration of emerging questions that may not yet have theory to support them. Similarly, abductive research makes observations based on data, but also seeks to provide an explanation for the observations (Folger & Stein, 2017). Abductive reasoning typically begins with an incomplete set of observations and proceeds to the likeliest possible explanation for the set. Although organizational research has historically prioritized theory-driven deductive research, inductive and abductive research valuably contribute to the advancement of scientific knowledge through the generation and explanation of new research questions.
Complexifying empirical research
The inclusion of multiple variables can increase the strength of empirical research by modeling more complex and realistic relationships between them (Berry & Sanders, 2018). Also, research can be simultaneously conducted at multiple levels of analysis that are hierarchically nested within each other, such as employees within teams, to better account for the complex reality of relationships in practice (Klein & Kozlowski, 2000; Zhou et al., 2019). Studies with nested data that do not account for multiple levels of analysis may misinterpret results. For instance, there may appear to be no relationship between training and employee performance until accounting for the effect of team membership, which might direct investigation into team-level variables influencing results (Snijders & Bosker, 2011).
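To make the nesting issue concrete, the sketch below (ours, not drawn from any cited study) contrasts a naive single-level regression with a random-intercept multilevel model for hypothetical employee-within-team data, using Python's statsmodels; the file and variable names (employee_training.csv, performance, training_hours, team_id) are illustrative assumptions only.

```python
# Minimal sketch, assuming a hypothetical dataset of employees nested in teams.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("employee_training.csv")  # hypothetical: one row per employee

# Single-level model: ignores that employees within a team share context,
# which can mask or distort the training-performance relationship.
ols_fit = smf.ols("performance ~ training_hours", data=df).fit()

# Multilevel model: a random intercept per team absorbs team-level differences
# before the individual-level training effect is estimated.
mlm_fit = smf.mixedlm("performance ~ training_hours", data=df,
                      groups=df["team_id"]).fit()

print("single-level slope:", ols_fit.params["training_hours"])
print("multilevel slope:  ", mlm_fit.params["training_hours"])
```

Comparing the two slopes (along with the team-level variance component reported by the multilevel model) indicates whether team membership is doing meaningful work, which is the kind of signal that can redirect attention toward team-level variables.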
In addition to traditional research methods, recent innovations in big-data methodologies, including artificial intelligence and machine learning, provide new possibilities in prediction (Tonidandel et al., 2016). These methods often rely on "organic data," or large datasets that emerge from ongoing information collection processes (e.g., HRM information systems; Groves, 2011). Organic data often arise from processes designed to tackle practical problems (McAbee et al., 2017), and typically constitute data that would be impossible or prohibitively difficult to collect via traditional research methods (Woo et al., 2020). With increased access to such data and the computational power to analyze larger datasets, more complex predictive models can be tested with statistical rigor. However, many researchers caution against "dustbowl empiricism," or the atheoretical approaches often associated with big-data methods, in which conclusions are driven by data collected under uncontrolled conditions. Further, decision-making models "backed" by big-data algorithms can exhibit bias (e.g., in assessment for selection; Dastin, 2018) and may fail to outperform traditional methods (Hickman et al., 2019).
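As a concrete illustration of that caution, the sketch below (our own, with hypothetical file and column names such as hris_extract.csv and job_performance) benchmarks a machine-learning model against a simple linear baseline using cross-validation, so that any claimed advantage of the more complex model is evaluated out of sample rather than assumed.

```python
# Minimal sketch, assuming a hypothetical HRIS export with numeric predictors.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

hr = pd.read_csv("hris_extract.csv")          # hypothetical "organic" dataset
X = hr.drop(columns=["job_performance"])      # predictors
y = hr["job_performance"]                     # criterion

models = {
    "linear baseline": LinearRegression(),
    "gradient boosting": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    # 5-fold cross-validation guards against overfitting-driven optimism.
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
```

If the complex model does not clearly beat the baseline, or if its predictions differ systematically across protected groups, the concerns about limited incremental value and bias noted above apply.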
Meta-analysis
Another type of research, meta-analysis, if carried out in accordance with professional standards (see APA, 2020), can overcome some of the limitations of individual empirical research studies (Huffcutt, 2004). Meta-analyses acknowledge the concept of sampling error in research and treat each individual study as a sample from the population, providing a more precise estimate of the actual relationship between variables, assuming most of the studies contributing correlations to the meta-analysis are not themselves highly flawed (LeBreton et al., 2014).
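To illustrate the logic of treating each study as a sample from a population, the following sketch (ours; the correlation and sample-size pairs are invented purely for illustration) computes a bare-bones, sample-size-weighted mean correlation and compares the observed variance of correlations with the variance expected from sampling error alone, in the spirit of Hunter and Schmidt (2004).

```python
# Minimal bare-bones meta-analysis sketch; (r, N) pairs are illustrative only.
studies = [(0.25, 120), (0.10, 80), (0.32, 300), (0.18, 150), (0.05, 60)]

total_n = sum(n for _, n in studies)
mean_r = sum(r * n for r, n in studies) / total_n   # weighted mean correlation

# Sample-size-weighted observed variance of the correlations.
var_obs = sum(n * (r - mean_r) ** 2 for r, n in studies) / total_n

# Variance expected from sampling error alone, given the average sample size.
avg_n = total_n / len(studies)
var_error = (1 - mean_r ** 2) ** 2 / (avg_n - 1)

print(f"weighted mean r: {mean_r:.3f}")
print(f"variance beyond sampling error: {max(var_obs - var_error, 0.0):.4f}")
```

Variance remaining after subtracting sampling error is what prompts the search for moderators; corrections for artifacts such as unreliability and range restriction (discussed below) refine these estimates further.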
Meta-analysis overcomes the problems of relying on the results of a single study by examining the full set of available evidence, allowing for valuable theoretical contributions. For example, Schmidt and Hunter (1998; Hunter & Schmidt, 2004) produced guidelines for meta-analytic methods that have served as a theoretical foundation for determining the relative predictive ability of various assessment techniques, and offered one of the first comparative analyses of predictor constructs and methods commonly used in personnel selection. An example of the application of meta-analysis is Ones et al. (1993), who used the method to test the generalizability of integrity test validities, finding that they can predict several types of counterproductive work behavior, extending previous theory suggesting integrity as a predictor of employee theft alone.
Importantly though, meta-analyses must be continually updated. Sackett et al. (2022), after identifying that Schmidt and Hunter (1998) had applied overcorrections for restriction of range, conducted a new meta-analysis using updated methods and research studies as input. They revealed meta-analytic estimates of predictor criterion-related validity that were substantially smaller than those reported by Schmidt and Hunter. Similarly, Van Iddekinge et al. (2012) conducted an update to the Ones et al. (1993) meta-analysis and found that the predictive validity of integrity tests was much smaller than previously specified. In sum, meta-analyses contribute to knowledge through their summation of individual empirical research studies; however, they must also be updated over time to reflect knowledge gained through new research.
Advantages and disadvantages of empirical research
The advantages of empirical research include the ability to infer the statistical and practical significance of observed relationships, and the confidence inherent in these conclusions. However, there are limits to what can be learned through individual empirical studies. By necessity, the number of variables examined in any one study is limited, and thus cannot completely reflect the complexity of real organizations. Field research in organizations can be expensive and intrusive, and thus replication is seldom carried out. In laboratory studies, it can be difficult to operationalize a variable to mimic work-related constructs. Moreover, research samples may not always match the organizational populations to which the research seeks to generalize. A tradeoff exists between the control a particular methodology allows and the generalizability of the results (Cook & Campbell, 1979). If the research aim is to generalize to a variety of contexts, a strong research study may prioritize generalizability at the expense of control. Alternatively, if the goal of the research is to determine causal relationships, strong research will prioritize control at the expense of generalizability. Not all research is equal in terms of methodological rigor, generalizability, level of control, and replicability. Indicators of strong research include clear and specific research questions, a rigorous research design, clear interpretations of results, transparent reporting, and clear methodology that allows for successful replication (Grand et al., 2018).
Practice
Practice informs us through direct experience and observations as well as the experience of others (e.g., case studies, testimonials; Audi, 2011). It may also come in the form of practitioner reports, such as benchmarking surveys assessing the popularity of various assessment and development practices. These accounts can be quite engaging and persuasive when they come from highly experienced and respected sources, but they can be context-specific and limited in generalizability.
Commonly, observations of practice in organizations lead to consensus around "best practices" that other organizations should implement. A "best practice" is a practice that has functioned successfully in one organization and has the potential to elicit success in another (Serrat, 2017). Its "best" status may be defined by a constellation of factors, including whether the practice is backed by organizational data and how those data were collected. Best practices are often supported by empirical research and aligned with relevant theory. However, assessment and development practices are not required to undergo peer-review processes (like published research) to be implemented; thus, one must evaluate biases that may exist (e.g., conflicts of interest; Lowman & Cooper, 2018). Altogether, knowledge can be acquired through understanding, identifying, and disseminating organizational best practices.
Induction
Practice can uniquely serve as a way of knowing through induction, which can occur during observations of practice. Induction begins with observations of phenomena that accumulate into general premises, in contrast to deduction, in which general premises are used to formulate specific hypotheses. General premises that inductively arise from observations of practice can be formulated into formal theory (Locke, 2007), but such formalization is not a requirement for induction to provide contextualized information relevant for working in the field.
In this way, induction occurring during observations of practice can generate applicable knowledge for those working in the organization. For example, trait activation theory suggests that assessors' ratings on different dimensions are more likely to converge when the observed behaviors relate to the same underlying trait (Lievens et al., 2006); however, an organization might find similar levels of convergence in ratings on dimensions with both similar and dissimilar underlying traits. In this instance, detailed observation of the assessment center in practice can help practitioners understand factors impacting the convergence of ratings for dimensions influenced by dissimilar underlying traits. Then, such factors can be incorporated into future research designs, and subsequently, theory.
Traditionally, organizational scholarship has disparaged drawing conclusions about organizational practices from case studies, instead favoring deductive methods, in which hypotheses are formed from prior theory and empirical research and then tested empirically (Platt, 1964; Popper, 2003). However, more inductive approaches, in which knowledge is acquired through such observation, have been shown to be appropriate in some contexts, and at times even ground-breaking (Locke, 2007). Indeed, it was through such inductive approaches that key advances in psychological knowledge came about (e.g., social-cognitive theory, Bandura, 1986; goal setting theory, Locke & Latham, 1990). Woo et al. (2017) advocate for inductive knowledge acquisition when the knowledge seeker begins with a clear purpose, exploits available data, remains flexible and thinks outside of the box, engages in collaborative information-sharing, seeks to replicate and cross-validate conclusions, and reports the observation collection process transparently. As such, observations of practice are essential to the acquisition of knowledge, and can be used systematically to inform assessment and development theory, empirical research, and subsequent practice.
Advantages and disadvantages of practice
Practice contributes to understanding in a unique way as compared to theory and empirical research. Individuals and organizations interpret information differently based on their perception of the information, oftentimes informed by their social context (Berger & Luckmann, 1966). Consequently, theories and research may suggest that a phenomenon will result in a certain outcome in an organization, but in reality the predicted outcome does not occur. In this way, observations of practices in organizations constitute a distinct way of knowing. At the same time, unsuccessful organizational practices can also provide fodder for new scientific advances. Strengthening partnerships among organizations and scientists can lead to research and theoretical advances that inform best practices and ultimately increase the robustness of the science (Grand et al., 2018). On the other hand, knowledge that arises from observations of practice may not generalize to dissimilar organizational contexts and may lack explanatory power. In sum, observations of practice can provide contextualized knowledge, but it is important to understand that this knowledge may not generalize to all contexts and that the mechanisms explaining the phenomenon may be unclear.
The value of monitoring all segments
Broadly, the goal of practitioners is to make informed decisions based on observations and accessible research, and the goal of scholars is to use the extant literature alongside practitioners’ observations to strengthen what we know; in turn, allowing practitioners to make better, more informed decisions. For scholars, this means seeking out critical evidence of what works in practice and conducting research across laboratory and field settings to address gaps, explain inconsistent findings, and develop sound theory applicable to real-world organizations. The premise of this article is that a field can have the most certainty about a piece of knowledge if evidence from theory, empirical research, and practice converge – an implicit truth that, by making explicit, we believe can provide a clearer path forward. Lacking this level of support, converging evidence from two segments may provide partial guidance. Importantly, no single segment alone can provide complete certainty.
Table 1 provides a more detailed account of sources of assessment center and training knowledge from theory, empirical research, and practice. Specifically, it summarizes "how we know what we know" in each content area, the primary sources where the knowledge originated, and the key secondary sources that integrate and propagate knowledge from the primary sources. We recommend that those seeking to learn more about a specific content area evaluate knowledge offered from theory, empirical research, and practice using a format similar to Table 1, to inform the research or practice they conduct.
Space constraints prevent us from walking through a complete epistemology of these areas. However, in the following sections, we demonstrate how our epistemology provides a framework for processing what is known in the areas of assessment centers and training, which together account for a wide swath of assessment and development practices. In doing so, we do not intend to present new information on these topics; rather, we aim to illustrate how our epistemology provides a framework for organizing exactly what is and is not known in a given area, as well as a perspective that highlights how best to move forward in advancing new knowledge.
Theory and empirical research converge (but not Practice)
When theory and research converge with little evidence that practice has followed, it could mean undocumented practice is underway, or that sound theory and research are not well understood or appreciated in practice and scholars need to improve translational communication (Banks & Murphy, 1985). The alignment of strong theory and research provides a foundation for continued research, but it may not prove as useful when it is not utilized in practice.
For example, theory and empirical research on assessment centers (ACs) show that assessors can only reliably differentiate three to five performance dimensions (Gaugler & Thornton, 1989). This aligns with information processing theory's assertion that individuals can only hold a limited amount of information in their working memory without making errors (Lachman et al., 1979). Therefore, assessors who rate fewer performance dimensions are more accurate in their ratings (Thornton et al., 2015). Despite this evidence, operational ACs often assess far more dimensions (i.e., 10–12, and even up to 20; Eurich et al., 2009), demonstrating how organizations have yet to adopt practices supported by theory and research.
A similar example can be found in the training literature. Bell and Kozlowski (2008) advanced a theoretical model, with empirical validation, explaining how individual differences and training design interact to affect learning and the transfer of knowledge back to the job. Their model has been substantiated and built upon by many researchers (e.g., Blume et al., 2019), demonstrating the importance of taking steps before, during, and after training to support transfer. Nonetheless, a lack of transfer continues to be a key issue in practice regardless of how effective training programs are at facilitating learning, and organizations commonly choose not to incorporate design characteristics that maximize transfer (Velada et al., 2007). There are many reasons for this, including a lack of time, accountability, evaluation efforts, and knowledge of best practices (Hutchins et al., 2010; Longnecker, 2004). Further, research insights are not always applicable in practice (Baldwin et al., 2017), emphasizing the need for translational work with actionable research findings to improve outcomes such as training transfer, including tools and guidelines that account for practical constraints (Hughes et al., 2018).
In essence, these examples illustrate the well-known "science-practice gap" (Rynes et al., 2001; Tkachenko et al., 2017). Identifying these types of divides can signal the need to forge partnerships among academics and practitioners to develop knowledge, and to disseminate that knowledge in accessible and actionable formats. A classic example lies in the history of the assessment center method. The wide success of assessment centers in selecting intelligence and military personnel during World War II (MacKinnon, 1977) led AT&T to carry out a longitudinal study of the ability of assessment center ratings to predict the career progression of managers. The work involved a partnership between researchers and consultants, notably Douglas Bray and William Byham, respectively. After the strong predictive validity of the method was noted, results and techniques were shared openly, not only with the scholarly community through the publication of Bray and Grant's (1966) article in Psychological Monographs, but also through Byham's (1970) practice-focused article published in Harvard Business Review. Indeed, it was this sort of transparent science-practice collaboration that led to a huge surge in demand for assessment center consultation, culminating in the founding of the firm Development Dimensions International (Thornton & Rupp, 2006).
Theory and practice converge (but not Research)
Next are instances where theory and practice converge but empirical research has either been limited or unreported. One possible explanation is that practitioners are satisfied with a practice and see no need for supportive research (or have no reason to be concerned about unsupportive research). Also, practitioners may not have had opportunities to engage in meaningful research. Another scenario would be that local, proprietary research has been conducted, but not presented publicly. Indeed, research can lag when organizations pioneer into new areas (e.g., big data; Tonidandel et al., 2016). In these cases, theory may explain successful practice while (public-facing) empirical validation awaits.
There exist multiple assessment center practices supported by theory but not empirically tested. For example, theory and practice suggest that motivation, cognitive understanding, and experience are related to assessee performance (e.g., Guidry et al., 2013). However, empirical research has yet to explore these issues in depth. Similarly, the use of virtual ACs has greatly increased within practice, with a number of conceptual and theory-based papers written about their use (e.g., Lanik, 2011; Reynolds & Rupp, 2010; Rupp et al., 2008), despite limited reliability and validity evidence to support this modality (for related examples, see Arthur et al., 2014; Illingworth et al., 2015; Morelli et al., 2014). Research is needed that investigates these theory-backed practices to establish their validity and generalizability across uses (selection, development), organizational levels (entry, management), and industries.
Similarly, the topic of training sustainability, including the need for refresher training (Lazzara et al., 2021), illustrates this segment of the epistemology. Both theory on skill decay over time and observations of decay in practice suggest that refresher training is often necessary to maintain an appropriate level of expertise. Meta-analyses suggest that skills decay with nonuse (Arthur et al., 1998). However, research offers limited insight into the specifics of what works and when refresher training might be necessary (Lazzara et al., 2021). Needs vary across organizations and skill types, as well as knowledge domains, new developments, and the amount of practice in the performance context. Some skills may need to be refreshed less often because they are practiced regularly, while others that are important but used infrequently (e.g., emergency procedures) likely need refreshing sooner. Research is needed that integrates theories such as those on skill decay (Arthur et al., 1998) to create and test frameworks for training sustainability.
Research and practice converge (but not Theory)
When empirical research and practice converge without theoretical explanation, the argument could be made that theory does not matter or that explanations are not needed. However, this "black box" or "dustbowl" empiricism can be problematic when issues arise in practice and there is no clear understanding of the mechanisms driving the effectiveness of various assessment and development techniques (Pam, 2020). For instance, multiple innovations within assessment and development have been put into practice while lacking explanatory theory (Lievens & Thornton, 2005), including speed assessments (Herde & Lievens, 2020), automatic scoring of job candidate essays and interviews (Campion et al., 2016; Chen et al., 2022), and asynchronous assessment (Lukacik et al., 2022). These modern practices may seem more resource-efficient than traditional methods. However, we lack the theory necessary to understand the mechanisms underlying their efficacy.
Similarly, the training field lacks an overall, cohesive theory that details the role of training within larger talent development and organizational effectiveness frameworks. For instance, considering the various ways employees develop expertise, it is currently unclear how formal and informal learning efforts might interact and how different channels might be leveraged to maximize benefits. Informal, on-the-job learning has become a key learning pathway in practice (making up 70–90% of all learning activities) and is effective at improving learning and performance (ATD, 2020; Cerasoli et al., 2018). Traditionally, organizations have not paid much attention to informal learning, but employees are now seeking these opportunities and may benefit from structure (ATD, 2020; Cerasoli et al., 2018). Ideally, this structure would be grounded in theory and backed by research on what employees can gain from informal learning in conjunction with training and other talent management interventions.
Conclusion
In this paper, we introduced a unique three-pronged epistemology for determining what is known about assessment and development. We then showed several topics within the assessment center and training areas where two sources of knowledge converge, but one source is lacking. Shockingly, we were not able to locate an example of complete epistemological convergence (i.e., the grey ABC segment of Figure 1). This should serve as a wake-up call to all those working in the assessment and development space: we must do a better job of working together to collect credible insights from theory, research, and practice, and then carefully and systematically assess their convergence to reach conclusions about what we confidently "know" in any given area.
This epistemological approach could be applied to other complex workplace interventions, as well. For example, pay and benefit plans vary considerably, and differing sources (i.e., theory, research, and practice) recommend different options (e.g., hourly vs. salary pay, incentives, profit sharing, early retirement, cafeteria-style benefit plans; Martocchio, 2020). Likewise, this epistemological approach could advance knowledge involving interventions to meet current and future workplace challenges, such as hybrid work arrangements; creating organizational cultures that value diversity and inclusion; and programs that support employee mental health and well-being.
Declaration of Interest
None.