
A brighter vision of the potential of open science for benefiting practice: A ManyOrgs proposal

Published online by Cambridge University Press:  27 January 2023

Christopher M. Castille*, Nicholls State University, Thibodaux, LA, USA
Tine Köhler, University of Melbourne, Parkville, Australia
Ernest H. O’Boyle, Indiana University, Bloomington, IN, USA
*Corresponding author. Email: christopher.castille@nicholls.edu

Commentaries

© The Author(s), 2023. Published by Cambridge University Press on behalf of the Society for Industrial and Organizational Psychology

Guzzo et al. (2022) are correct in pointing out key challenges that open science principles and practices present to us as an applied discipline. Our commentary on Guzzo et al. (2022) focuses on three points they make. First, Guzzo et al. (2022) recognize the need for greater collaboration between academics and practitioners in adapting open science practices to applied settings. Such collaboration is needed to avoid harming both our practical relevance and our scientific integrity. Second, Guzzo et al. raise meaningful concerns about incentivizing open science practices, which they frame as harming applied research. Third, they acknowledge open science discussions on the need for replication. Interestingly, in contrast to open science advocates, they urge our stakeholders to prioritize conceptual replication (a new approach to testing the same idea) over direct replication (same materials and methods, new observations), providing big data research as an exemplar of conceptual replication research.

In essence, Guzzo et al. frame open science as the enemy of practice. We wonder if this framing is helpful for making our science stronger and better. Additionally, their recommendations—relying on big data, incentivizing conceptual replication, and the selective use of pre-registration—do not address the deeper issues motivating the open science movement, namely that publication and outcome-reporting bias are pervasive (e.g., Banks et al., 2016) and can be traced to a key problem: insufficient resources. How have other sciences addressed this problem? Physicists overcame resource constraints by pooling what they had, giving rise to powerful tools such as the James Webb Space Telescope and the Large Hadron Collider. Such tools could not have been created without many scientists and institutional bodies collaborating and sharing what they can. Similarly, psychologists have pooled limited resources to overcome long-standing shortcomings facing our discipline (for a review, see Uhlmann et al., 2019).

What is needed is a compelling vehicle for pooling our resources. How might leveraging open science practices promote greater collaboration between academics and practitioners? How could we incentivize the thoughtful uptake and application of open science practice among academics and practitioners? How do we incentivize replications? With our commentary, we add to Guzzo et al.’s piece by addressing these three questions. Specifically, we draw inspiration from an innovation emerging from the open science movement—crowdsourced multisite replication research (Moshontz et al., 2018; Uhlmann et al., 2019). There has been little discussion about leveraging crowdsourced multisite replication research in field settings of interest to industrial-organizational (I-O) psychology (i.e., organizations). We hope to prompt this discussion by proposing that I-O psychologists form a crowdsourced multisite replication initiative that services field settings. We outline one possible initiative (we call it “ManyOrgs”) and explain how it offers a pragmatic (if challenging) solution to problems facing our field.

What is crowdsourced multisite replication research?

Crowdsourced multisite replication research combines two ideas: (a) crowdsourcing research and (b) multisite replication. Whereas crowdsourcing research involves leveraging a “crowd” for all stages of the research process, multisite replication involves investigators across multiple sites collaborating, often in the form of pooling resources (e.g., materials, code, design choices, access to participants), to answer a research question of mutual interest.

Crowdsourced multisite replication initiatives have grown in popularity throughout the sciences, particularly in the wake of the replication crisis. Examples include projects that leverage crowds to identify distant star clusters, to improve predictions of breast cancer survival following treatment, and to determine which psychological findings replicate in large multisite efforts (see Uhlmann et al., 2019). Multisite collaboration initiatives are a solution to key methodological challenges facing any scientific discipline: pooling limited resources to achieve sufficiently high statistical power for testing hypotheses, assessing the generalizability and replicability of effects, promoting the uptake of open science practices via collaboration, and promoting inclusion and diversity within the research community (Moshontz et al., 2018; Uhlmann et al., 2019). Such problems—low-powered tests, unclear generalizability and replicability, and low uptake of open science practices—are clearly present in our discipline (see Banks et al., 2016) but largely ignored in Guzzo et al.’s discussion of open science.
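
To make the statistical power point concrete, here is a minimal back-of-the-envelope sketch (our own illustration; the effect size of r = .10 and the sample sizes of 150 and 1,500 are assumed values, not figures from any cited study). It approximates, via the Fisher z transformation, the power of a single organization’s sample versus a pooled multisite sample to detect a small correlation.

```python
# A minimal sketch (our own illustration; r and the sample sizes are assumed
# values, not figures from any cited study): approximate two-tailed power to
# detect a correlation via the Fisher z transformation.
from math import atanh, sqrt
from statistics import NormalDist

def power_for_r(r: float, n: int, alpha: float = 0.05) -> float:
    """Approximate power to detect a population correlation r with n observations."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)        # two-tailed critical value under H0
    ncp = atanh(r) * sqrt(n - 3)             # Fisher z of r times its precision
    return 1 - z.cdf(z_crit - ncp)           # upper-tail power (lower tail is negligible)

# One organization contributing 150 employees vs. ten organizations pooling 1,500
print(round(power_for_r(0.10, 150), 2))      # ~0.23: badly underpowered
print(round(power_for_r(0.10, 1500), 2))     # ~0.97: pooling makes the test informative
```

Under these assumptions, no single site comes close to conventional power benchmarks, whereas the pooled multisite sample does; this is the resource-pooling logic in miniature.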

In the context of I-O psychology, a mechanism for crowdsourced multisite replication research would involve making the needs of practitioners, researchers, academics, and the parties impacted by our work—such as employees, managers, and the organizations within which they work—open and accessible from anywhere on the planet. Contributors could be given open access to every stage of the research process, from generating ideas and solutions to problems, to having those ideas vetted openly via peer review and designs critiqued by the parties we serve, to making research products (e.g., published manuscripts) openly available (for more practices, see Uhlmann et al., 2019). Obviously, ethical and legal restrictions must be maintained (e.g., maintaining the confidentiality and anonymity of individuals and organizations contributing data), but ultimately parties would pool their resources to answer questions of mutual interest.

Although crowdsourcing research may seem radical for an applied psychological discipline, it is worth pointing out that the Society for Industrial and Organizational Psychology (SIOP) already has a variation of such research in place. Consider the annual machine learning competitions, in which organizations share anonymized data publicly for analysis by multiple teams. Teams (usually of doctoral students) use a variety of approaches and make their analytical tactics available online (usually via GitHub). The winning team is announced at the annual conference. No doubt the hosting firm gains insight into its own data (if only from a different perspective), and open science practices facilitate sharing those insights publicly.

What we are proposing is the creation of a crowdsourced multisite replication initiative for I-O psychology field research. Such a vehicle could be a proverbial “Craigslist” for matching academics with practitioners, maximizing our collective ability to address applied research goals by finding collaborators with valuable resources (e.g., access to employees, organizational settings, expertise, novel analytical approaches). However, by leveraging the multisite replication element, the initiative we have in mind goes far beyond fostering small-team collaboration. It should lead to big team science in I-O psychology and the corresponding benefits that come from it, including achieving a sufficiently high level of statistical power to test our hypotheses, assessing the generalizability and replicability of phenomena across sites, spurring the thoughtful uptake of open science practices via collaboration, and promoting inclusion and diversity within our research community (Moshontz et al., 2018; Uhlmann et al., 2019). Such an initiative should yield rich research products, such as templates for conducting research, resources for learning advanced skills, mentoring and career development opportunities, and recommendations for future research.

To make our proposal concrete, we provide an outline for such a crowdsourced multisite replication initiative, which we tentatively call “ManyOrgs.” We chose this name partly because the initiative should both crowdsource research needs among multiple organizations and facilitate multisite replication research across many organizational settings. The name also pays homage to the Many Labs studies sponsored by Nosek and colleagues.

The proposed ManyOrgs initiative

“ManyOrgs” refers broadly to the network of members, such as practitioners in organizational settings and academics, who are interested in conducting crowdsourced multisite replication research in organizational settings; we term such studies “ManyOrgs studies.” To clarify the difference between these two terms, consider that, as a crowdsourced multisite replication initiative, ManyOrgs gives network members a venue for voicing research needs openly. We fully expect that members who share research needs will often form independent or small-team collaborations. Although spurring small-team collaborations is certainly desirable from our standpoint, the broader aim of ManyOrgs is to promote big team science in I-O psychology: identifying the salient research needs of network members that are best answered by collaboration among a large number of them. Research needs that are salient among many network members are ripe for a multisite replication effort, one in which a large number of members obtain robust answers by pooling their resources. Timely topics (e.g., the Great Resignation) seem especially well suited for such study. We suspect that these studies will rapidly expand the uptake of open science practices because, as we will explain, such practices are required to execute them.

Our ManyOrgs initiative goes beyond Guzzo et al. in a few ways. Guzzo et al. call for conceptual replications (same idea, different methods) to be incentivized rather than direct replications (same idea, same methods). Unfortunately, research suggests that when conceptual replication efforts yield null or opposing findings, they are often suppressed in the publication process (see Landy et al., 2020). In other words, incentivizing conceptual replications could further aggravate the very problems Guzzo et al. aim to address via the thoughtful application of open science principles and practices.

We submit that, viewed through a crowdsourcing lens, replication research need not prioritize conceptual replication over direct replication: either could be pursued. For instance, consider the crowdsourcing study led by Landy et al. (2020) on questions related to negotiation, moral judgment, and implicit cognition. Independent research teams volunteered different designs for addressing key questions in these domains, and participants were randomly assigned to these designs. Heterogeneity in the results was explained largely by the hypothesis in question; still, different teams investigating the same hypothesis often came to exactly opposite conclusions depending on the design that was used. It would be interesting to create an analogous study within the I-O psychology domain in which conceptual replications are crowdsourced via independent organizational settings. We could see how variation in design and analytic choices impacts effect size estimates, more effectively revealing true sources of variation. ManyOrgs could facilitate such an effort.

Alternatively, we believe that there is a more valuable replication route that is missed by Guzzo et al. Conceptual replications can contain all of the flaws of the original design (e.g., a small, homogeneous sample). Recent research by Köhler and Cortina (2021) argues that such conceptual replications could very well be regressive, adding nothing to our knowledge base. They urge researchers to focus greater effort on conducting constructive replications, which test the same hypothesis or model in a way that retains all of the virtues of previous approaches while addressing at least one key methodological shortcoming. Köhler and Cortina further distinguish among three kinds of constructive replications: (a) incremental (reflecting only one key methodological advancement over the original study), (b) substantial (more than one advancement), and (c) comprehensive (all key methodological shortcomings of the original study are addressed). In our view, ManyOrgs would provide an opportunity for researchers to execute substantial constructive replications that benefit both our science and our practice. Because ManyOrgs is a crowdsourcing initiative that draws on open peer review, study proposals can be shaped by network members into multisite constructive replication studies that address a salient need of many members. Prospective meta-analysis can feature prominently in these proposals, promising effect size estimates that are more precise than those from any single participating study as well as a more robust analysis of moderating effects than alternative approaches (e.g., small-team collaborations, conceptual replication efforts) can provide.
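
To illustrate what the prospective meta-analytic pooling step might look like, the sketch below combines Fisher z transformed correlations from hypothetical participating sites using a DerSimonian-Laird random-effects model; the site-level values are invented solely for illustration, and the model choice is one common option rather than a prescribed ManyOrgs method.

```python
# A minimal sketch of the pooling step a prospective meta-analysis might use.
# The site-level (correlation, sample size) pairs below are invented for
# illustration; they are not data from any actual ManyOrgs study.
from math import atanh, sqrt, tanh

sites = [(0.12, 220), (0.05, 540), (0.18, 130), (0.09, 850), (0.14, 310)]

ys = [atanh(r) for r, n in sites]        # Fisher z effect sizes
vs = [1.0 / (n - 3) for _, n in sites]   # their sampling variances

# Fixed-effect weights, pooled mean, and Q (between-site heterogeneity statistic)
w = [1.0 / v for v in vs]
y_fe = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, ys))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(sites) - 1)) / c)   # DerSimonian-Laird between-site variance

# Random-effects pooled estimate and its standard error
w_re = [1.0 / (v + tau2) for v in vs]
y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
se_re = 1.0 / sqrt(sum(w_re))

print(f"pooled r = {tanh(y_re):.3f} "
      f"(95% CI on z scale: {y_re - 1.96 * se_re:.3f} to {y_re + 1.96 * se_re:.3f}), "
      f"tau^2 = {tau2:.4f}")
```

Because the protocol and analysis plan are specified before any site-level results are known, this kind of pooling sidesteps the publication and outcome-reporting biases that plague retrospective syntheses while delivering the precision gains of a large combined sample.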

Executing a ManyOrgs study would require participating organizations to share access to materials (e.g., items, measures) and procedures (e.g., time lags) in order to participate in a broader multisite constructive replication effort. As such, greater transparency and documentation of research workflows will be necessary for a study to be executed effectively. Although such studies require a great deal of overall effort, the opportunity to contribute to one, being rare and highly impactful for our science and practice, could be difficult to pass up, especially when authorship could be granted for even small contributions (e.g., building analytical code, pre-registering study hypotheses, clarifying what makes a study a constructive replication, sharing data). In other words, small wins collectively add up to meaningful contributions to our science and practice. All contributors can receive authorship, and the CRediT taxonomy (see https://casrai.org/credit/) can be consulted for clear guidance on clarifying how someone contributed. If necessary, nondisclosure agreements, which Guzzo et al. and other open science advocates (e.g., Uhlmann et al., 2019) advise, could be used to facilitate insight sharing across sites without forcing contributors to adopt the full repertoire of open science practices (e.g., data sharing). In short, ManyOrgs studies can greatly spur the adoption of open science practices among academics and practitioners.
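
As a small illustration of how such contributions could be logged, the sketch below maps hypothetical contributors to CRediT roles and renders a contributorship statement; the names and role assignments are invented, and only the role labels come from the CRediT taxonomy itself.

```python
# A minimal sketch (hypothetical contributors and role assignments) of logging
# ManyOrgs contributions against CRediT roles and rendering a statement in the
# style of an author-contributions note.
contributions = {
    "A. Researcher": ["Conceptualization", "Writing - original draft"],
    "B. Practitioner": ["Resources", "Data curation"],
    "C. Analyst": ["Software", "Formal analysis"],
    "D. Coordinator": ["Project administration", "Writing - review & editing"],
}

def contributorship_statement(contribs: dict[str, list[str]]) -> str:
    """Render a CRediT-style statement, one clause per contributor."""
    return " ".join(f"{name}: {', '.join(roles)}." for name, roles in contribs.items())

print(contributorship_statement(contributions))
```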

Lastly, we must emphasize that, as a crowdsourced multisite replication initiative, ManyOrgs promotes inclusion and diversity within the research community (Moshontz et al., 2018). In principle, anyone who can put together a study proposal that is relevant to our discipline should be able to contribute to and participate in ManyOrgs. Creating robust and generalizable knowledge requires collaboration across different organizations and across potentially vast geographic and cultural distances, as well as sampling from underrepresented settings and populations (e.g., blue-collar jobs, Africa-based organizations, illiterate workers, small and micro businesses, nonprofit organizations). In principle, any work setting is relevant to our science. Scholars in our network with access to such organizations can contribute to a broader ManyOrgs effort, either by proposing a study or by helping to gather data to address a salient and widely shared need. We should also note that promoting such inclusivity and diversity in our research community is not addressed by Guzzo et al.’s proposal.

Conclusion

We want to commend Guzzo et al. for raising the topic of how best to implement open science principles and practices as an applied discipline. Although we see merit in their proposal (and incorporated elements of it into our own as we saw fit), we sought to provide a brighter vision of what open science can mean for our field via our proposal for ManyOrgs. We can collectively choose to view open science as a challenge to our field and find small ways to contribute to making our broader scientific enterprise more robust (Castille et al., 2022). We hope our commentary about forming a crowdsourced multisite replication initiative that services field settings prompts deeper discussion about ways to further enhance the quality of our science.

Acknowledgments

Christopher M. Castille: conceptualization, writing—original draft, writing—review and editing. Tine Köhler: writing—review and editing. Ernest H. O’Boyle: writing—review and editing.

References

Banks, G. C., O’Boyle, E. H. Jr., Pollack, J. M., White, C. D., Batchelor, J. H., Whelpley, C. E., Abston, K. A., Bennett, A. A., & Adkins, C. L. (2016). Questions about questionable research practices in the field of management: A guest commentary. Journal of Management, 42(1), 5–20. https://doi.org/10.1177/0149206315619011
Castille, C. M., Kreamer, L. M., Albritton, B. H., Banks, G. C., & Rogelberg, S. G. (2022). The open science challenge: Adopt one practice that enacts widely shared values. Journal of Business and Psychology, 37, 459–467. https://doi.org/10.1007/s10869-022-09806-2
Guzzo, R., Schneider, B., & Nalbantian, H. (2022). Open science, closed doors: The perils and potential of open science for research in practice. Industrial and Organizational Psychology: Perspectives on Science and Practice, 15(4), 495–515.
Köhler, T., & Cortina, J. M. (2021). Play it again, Sam! An analysis of constructive replication in the organizational sciences. Journal of Management, 47(2), 488–518. https://doi.org/10.1177/0149206319843985
Landy, J. F., Jia, M. (Liam), Ding, I. L., Viganola, D., Tierney, W., Dreber, A., Johannesson, M., Pfeiffer, T., Ebersole, C. R., Gronau, Q. F., Ly, A., van den Bergh, D., Marsman, M., Derks, K., Wagenmakers, E.-J., Proctor, A., Bartels, D. M., Bauman, C. W., Brady, W. J., … Uhlmann, E. L. (2020). Crowdsourcing hypothesis tests: Making transparent how design choices shape research results. Psychological Bulletin, 146(5), 451–479. https://doi.org/10.1037/bul0000220
Moshontz, H., Campbell, L., Ebersole, C. R., IJzerman, H., Urry, H. L., Forscher, P. S., Grahe, J. E., McCarthy, R. J., Musser, E. D., Antfolk, J., Castille, C. M., Evans, T. R., Fiedler, S., Flake, J. K., Forero, D. A., Janssen, S. M. J., Keene, J. R., Protzko, J., Aczel, B., … Chartier, C. R. (2018). The Psychological Science Accelerator: Advancing psychology through a distributed collaborative network. Advances in Methods and Practices in Psychological Science, 1(4), 501–515. https://doi.org/10.1177/2515245918797607
Uhlmann, E. L., Ebersole, C., Chartier, C., Errington, T., Kidwell, M., Lai, C. K., McCarthy, R. J., Riegelman, A., Silberzahn, R., & Nosek, B. A. (2019). Scientific utopia III: Crowdsourcing science. Perspectives on Psychological Science, 14(5), 711–733. https://doi.org/10.1177/1745691619850561