
Diversity of contributions is not efficient but is essential for science

Published online by Cambridge University Press: 05 February 2024

Catherine T. Shea
Affiliation: Tepper School of Business, Carnegie Mellon University, Pittsburgh, PA, USA. ctshea@andrew.cmu.edu https://www.cmu.edu/tepper/faculty-and-research/faculty-by-area/profiles/shea-catherine.html

Anita Williams Woolley*
Affiliation: Tepper School of Business, Carnegie Mellon University, Pittsburgh, PA, USA. awoolley@andrew.cmu.edu https://scholars.cmu.edu/418-anita-woolley

*Corresponding author.

Abstract

Dominant paradigms in science foster integration of research findings, but at what cost? Forcing convergence requires centralizing decision-making authority, and risks reducing the diversity of methods and contributors, both of which are essential for the breakthrough ideas that advance science.

Type: Open Peer Commentary
Copyright: © The Author(s), 2024. Published by Cambridge University Press

The integrative experiment design approach advocated by Almaatouq et al. represents an intervention to accelerate the convergence of research findings in the social and behavioral sciences. Observations from the evolution of scientific fields over centuries lead to questions about whether the results of such an intervention would be uniformly positive. According to Kuhn (1962), all scientific fields go through initial periods, sometimes spanning centuries as in the case of physics, during which many concepts and competing models are proposed. This continues until a breakthrough insight reconciles discrepancies and establishes a dominant paradigm around which the field coheres. Dominant paradigms enable what Kuhn (1962) calls "normal science," coordinated efforts to refine the paradigm and build evidence; however, the power structure surrounding a dominant paradigm can suppress alternative perspectives, making it difficult to prompt its reconsideration.

These observations from the history of science suggest potential unintended consequences of the intervention to accelerate convergence that Almaatouq et al. propose. Two sources of concern are the power structures that typically evolve to maintain organizing paradigms, and the potential they have to overly constrain the breadth of inputs considered, both of which can be problematic for a young science focused on diverse, multifaceted phenomena.

Who decides?

Pfeffer (1993) cautioned that fields with higher levels of consensus get that way via a core group of elite scholars who wield control. Imposing a framework to foster consensus requires some mechanism for decision making. For instance, which variables are included or receive more attention? When two groups of researchers have converged on the same topic, who gets the naming rights to the theoretical space? These issues are often sorted out via peer review and the citation of papers, which admittedly is not "efficient" but incorporates the judgments of many other researchers in the field, based on their assessment of the evidence. And, contrary to the authors' claim that no integrating frameworks exist, we point to a few recent examples in work on team process (e.g., Marks, Mathieu, & Zaccaro, 2001) and team structure (e.g., Hollenbeck, Beersma, & Schouten, 2012) that have been built upon by others based on the evidence supporting them. In the approach proposed by Almaatouq et al., the dimensions are "mapped" onto the design space before the experiment is run, by a "cartographer" – but how does this occur?
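To make the concern concrete, consider a minimal sketch of what a cartographer's mapping amounts to in practice. The dimensions and levels below are hypothetical illustrations of our own, not Almaatouq et al.'s actual specification; the point is that a design space is simply the Cartesian product of whatever dimensions someone chooses to enumerate, so every inclusion or exclusion is a decision made before any data are collected.

```python
from itertools import product

# Hypothetical dimensions a "cartographer" might choose for a
# team-performance design space; every entry is a judgment call
# made before any experiment is run.
DIMENSIONS = {
    "group_size": [2, 3, 5, 8],
    "communication": ["face-to-face", "text chat", "none"],
    "incentive": ["individual", "shared"],
    # Anything not listed here (e.g., power dynamics, member
    # diversity) simply does not exist in this design space.
}

# The design space is the Cartesian product of the chosen levels.
design_space = [
    dict(zip(DIMENSIONS, levels))
    for levels in product(*DIMENSIONS.values())
]

print(len(design_space))  # 4 * 3 * 2 = 24 candidate conditions
print(design_space[0])
# {'group_size': 2, 'communication': 'face-to-face', 'incentive': 'individual'}
```

The mechanics are trivial; the authority is not. Whoever writes down the dimension list has already decided which variables are allowed to matter, which is precisely the concentration of decision-making power that worries us.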

The solution – as proposed – is for machine learning to make such decisions for us. While this elegantly sidesteps vesting the decisions in any one individual or group, machine-learning algorithms by their very nature have bias baked into them (e.g., Fu, Aseri, Singh, & Srinivasan, 2022). Furthermore, while machine learning undoubtedly has a substantial role to play in many areas of science, machine-learning models can only analyze the information they are provided, and are typically not able to identify variables that have not yet been considered but should be. While Almaatouq et al. would argue that such machines are flexible and adaptive to change, we see this as overly optimistic. Indeed, across multiple disciplines and experiments, we can say with some certainty that a status quo – once set – is very difficult to change, as Kuhn's (1962) observations of the difficulty of challenging dominant paradigms demonstrate.
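The omitted-variable worry can be illustrated with a toy simulation; this is a deliberately simplified sketch of our own, not a claim about any specific integrative pipeline. A model fit on the features it is handed will distribute explanatory credit among them, and while a poor fit signals that variance is unexplained, nothing in the output can name the unmeasured variable responsible.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two measured features and one unmeasured confounder z.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
z = rng.normal(size=n)  # never handed to the model

# The outcome is driven mostly by the unmeasured variable.
y = 0.2 * x1 + 0.1 * x2 + 2.0 * z + rng.normal(scale=0.1, size=n)

# Least-squares fit on the measured features only.
X = np.column_stack([np.ones(n), x1, x2])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", coefs.round(2))

# The low R^2 flags unexplained variance, but the model cannot
# tell us that z exists, let alone what it is.
ss_res = np.sum((y - X @ coefs) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print("R^2 on measured features:", round(1 - ss_res / ss_tot, 2))
```

A low R-squared tells a researcher that something is missing, but not what; proposing z in the first place is exactly the kind of conceptual contribution that an automated convergence pipeline cannot supply.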

Another significant challenge stems from the strong incentives for researchers to introduce novel ideas in order to advance their careers. As Almaatouq et al. acknowledge, these incentives are at odds with efforts to promote convergence, since there are few rewards for researchers who contribute to "normal science." Though the authors attempt to brush this aside by pointing to examples from physics, it is important to note that fields requiring major infrastructure investment also tend to be more hierarchical, and to struggle with other issues such as sexual harassment as well as gender gaps in participation and career length (Huang, Gates, Sinatra, & Barabási, 2020; National Academies of Sciences, Engineering, and Medicine, 2018). Thus, the efficiency that can come from centralizing decision-making authority to accelerate convergence also risks introducing some of the known problems associated with consolidating power (Pfeffer, 1993).

Limiting diversity

The imposition of a framework for fostering convergence risks not only creating problematic power dynamics but also limiting the diversity of ideas in undesirable ways. Almaatouq et al. argue that their framework enables different studies to make their measures "commensurable." They claim this can facilitate the integration of research using different methods; however, it most naturally lends itself to the "high-throughput" techniques they mention, typically online experiments, to generate the volume of data needed for sampling the design space. This is likely to result in more uniformity in the methods and measures used. While some may see this as desirable, we point out that when different studies using different methods yield convergent patterns of results, the field can have greater confidence in those effects, as advocates of "full-cycle research" (e.g., Chatman & Flynn, 2005) point out. Conversely, using the same measures and methods might make the results of different studies "commensurable," but could mask limits to generalizability. Indeed, just as we have made great strides to sample beyond undergraduate students, we need to continue to push scientists to replicate and extend their work beyond online samples, which are limited in their ability to capture rich behavioral outcomes. We also need to continue to broaden connections across contexts and disciplines to enable surprising new breakthroughs to emerge (Shi & Evans, 2023).

The social and behavioral sciences are at an exciting nexus. Diversity is finally gaining traction: Historically underrepresented groups are bringing new theory and ideas to our historically homogeneous field. Taken-for-granted knowledge is being falsified, or shown to apply only to dominant groups. Exciting perspectives are just now being brought to fruition. To borrow the authors' terminology, the "unknown unknowns" are just starting to emerge thanks to the burgeoning diversity in the field. Will kicking off an intervention to force convergence, facilitated by machine learning, bake today's biases into algorithms that stymie the diversity that is just starting to take hold in our fields (Daft & Lewin, 1990)?

Financial support

This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.

Competing interest

None.

References

Chatman, J. A., & Flynn, F. J. (2005). Full-cycle micro-organizational behavior research. Organization Science, 16(4), 434–447.
Daft, R. L., & Lewin, A. Y. (1990). Can organization studies begin to break out of the normal science straitjacket? An editorial essay. Organization Science, 1(1), 1–9.
Fu, R., Aseri, M., Singh, P. V., & Srinivasan, K. (2022). "Un"fair machine learning algorithms. Management Science, 68(6), 4173–4195. https://doi.org/10.1287/mnsc.2021.4065
Hollenbeck, J. R., Beersma, B., & Schouten, M. E. (2012). Beyond team types and taxonomies: A dimensional scaling conceptualization for team description. Academy of Management Review, 37(1), 82–106.
Huang, J., Gates, A. J., Sinatra, R., & Barabási, A.-L. (2020). Historical comparison of gender inequality in scientific careers across countries and disciplines. Proceedings of the National Academy of Sciences of the United States of America, 117, 4609–4616. https://doi.org/10.1073/pnas.1914221117
Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.
Marks, M. A., Mathieu, J. E., & Zaccaro, S. J. (2001). A temporally based framework and taxonomy of team processes. Academy of Management Review, 26(3), 356–376.
National Academies of Sciences, Engineering, and Medicine. (2018). Sexual harassment of women: Climate, culture, and consequences in academic sciences, engineering, and medicine. National Academies Press. https://doi.org/10.17226/24994
Pfeffer, J. (1993). Barriers to the advance of organizational science: Paradigm development as a dependent variable. Academy of Management Review, 18(4), 599–620.
Shi, F., & Evans, J. (2023). Surprising combinations of research contents and contexts are related to impact and emerge with scientific outsiders from distant disciplines. Nature Communications, 14(1), Article 1. https://doi.org/10.1038/s41467-023-36741-4