1. Introduction
AI applications give rise to a wide range of ethical and epistemic challenges. On the side of epistemology, researchers have highlighted the limited replicability and robustness of AI outcomes and their misalignment with users’ expectations and needs (e.g., Herrmann et al. 2024a; Hutson 2018; Messeri and Crockett 2024; Semmelrock et al. 2025). On the side of ethics, scholars have raised a plethora of issues, from algorithmic bias and fairness-related harms to AI’s potential to undermine trust or human agency (e.g., Fazelpour and Danks 2021; Floridi 2023; Selbst et al. 2019).
This paper investigates a specific culture of interdisciplinarity that has gained traction at the intersection of applied AI and ethics: a mode of research that seeks to mitigate social and ethical harms by importing norms, methodologies and governance frameworks from established disciplines, such as the social sciences or medicine, into applied AI.Footnote 1 While acknowledging its contributions, I show how this importation presupposes and endorses a framing of applied AI as a domain separate from established disciplines. Yet such separation is precisely what allows AI practitioners to operate outside disciplinary norms that have evolved to prevent the harms now associated with AI applications. Were AI applications understood as situated firmly within these disciplines, practitioners would already be accountable to their ethical and methodological standards. Paradoxically, this particular culture of interdisciplinarity might thus reinforce a problematic disciplinary isolation of applied AI that lies at the source of the very ethical issues it seeks to mitigate. It risks treating an ever-growing list of symptoms while playing into their cause.
In response, I outline three paths forward. They differ in the extent to which they view applied AI’s disciplinary status as primarily a matter of framing or of institutional reality. Rather than advocating for one approach, I sketch how each path might benefit from closer engagement with scholarship in the social studies of interdisciplinarity. As a former AI practitioner now engaged in the ethics of science and technology, I aim to encourage further research at the intersection of applied AI, ethics and the social studies of interdisciplinarity and disciplinarity.
2. A specific culture of interdisciplinarity
Over the past decade, ethical and social issues related to AI have garnered increasing attention. A wide range of research has emerged to confront these issues, much of it interdisciplinary in character. Amongst these efforts, a specific mode of interdisciplinary engagement seeks to mitigate ethical harms by importing norms, methodologies and governance frameworks from established disciplines into applied AI.Footnote 2
This section illustrates and problematizes this culture of interdisciplinarity. It does so in four parts. First, I introduce research on AI fairness in which scholars have tied fairness-related harms to AI practitioners’ tendency to conflate social concepts and their operationalizations. Second, I illustrate this kind of research and its implications in the case of educational AI. Third, I outline how ethically motivated research at the intersection of social science measurement and AI can paradoxically contribute to some of the ethical issues it seeks to mitigate. Lastly, I show how this research is representative of a broader culture of ethically motivated interdisciplinary scholarship at the intersection of established disciplines and applied AI.
2.1. Introducing interdisciplinary research on AI fairness and construct validity
The particular culture of interdisciplinarity I seek to examine here can be illustrated by prominent work at the intersection of social science measurement and applied AI. Such research often begins by acknowledging that significant ethical issues can arise when AI practitioners insufficiently distinguish between the concept of interest and the particular data their model predicts. Those engaged in this endeavor note that AI practitioners “are often inclined to collapse the distinctions between constructs and their operationalizations, either colloquially or epistemically. […]” (Jacobs and Wallach 2021, p. 10) and that “many times in practice, when machine learning makes claims, it forgets to recognize and communicate that the measurements serving as ‘ground truth’ are a black box that hide problems of measurement, and that the ground truth is not construct itself” (Malik 2020, p. 11).Footnote 3 Concretely, this might mean that practitioners fail to distinguish between the concept of depression and the outcome of a particular psychometric test of depression, the concept of socioeconomic status and a specific measurement of wealth, or an employee’s work performance and her ranking in a particular evaluation matrix that an AI model is trained to predict.Footnote 4
Scholars have emphasized the significant ethical stakes involved in this practice, often focusing on how overlooked discrepancies between a target concept and its operationalization can generate fairness-related harms in decision-making contexts. A prominent case in point is the insufficient recognition of the potential incongruities between the normative concept of “care need” – central to healthcare resource allocation – and its possible operational proxy, “care costs.” When such conceptual distinctions are neglected, factors like a patient’s insurance status, which influences billing practices, can distort algorithmic recommendations or policy decisions, ultimately resulting in inequitable care distribution (Jacobs and Wallach 2021, p. 4; Tal 2023). Analogously, underacknowledged differences between poverty and a particular operationalization used by AI-driven poverty prediction (Mussgnug 2022) or between teacher effectiveness and a particular metric of it (O’Neil 2016) can result in fairness-related harms.Footnote 5
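To see how such a proxy mismatch can translate into inequitable allocation, consider the following toy simulation. It is my own illustration rather than part of the cited studies: the group labels, noise levels and the 20% selection threshold are invented assumptions, chosen only to show how equal need can yield unequal access when an allocation rule targets a cost proxy that systematically understates one group’s need.

```python
# Hypothetical illustration of the "care need" vs. "care cost" mismatch described above.
# All numbers and group labels are invented for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

group = rng.integers(0, 2, size=n)               # two patient groups, labeled 0 and 1
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true care need, identically distributed in both groups

# Billed costs track need, but group 1 generates lower costs at equal need
# (e.g., because of insurance status and billing practices).
cost = need * np.where(group == 1, 0.6, 1.0) + rng.normal(scale=0.2, size=n)

# An allocation rule that targets the top 20% of patients by the cost proxy.
threshold = np.quantile(cost, 0.8)
selected = cost >= threshold

for g in (0, 1):
    rate = selected[group == g].mean()
    mean_need = need[group == g].mean()
    print(f"group {g}: mean true need = {mean_need:.2f}, selected for extra care = {rate:.1%}")
```

Despite identical distributions of true need, the group whose billed costs understate its need is selected at a markedly lower rate – the structure of the harm the cited work describes.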
In response to these problems, researchers have suggested importing methodological resources and insights around measurement modeling and validation from disciplines in the social sciences into AI research (e.g., Guerdan et al. 2022; Jacobs and Wallach 2021; Kleinberg et al. 2018). One prominent and influential example of this approach is a 2021 FAccT paper by Abigail Z. Jacobs and Hanna Wallach.Footnote 6
Jacobs and Wallach begin by introducing measurement modeling and measurement validation. Roughly speaking, measurement validation and modeling both concern the relationship between a measurement and a concept of interest – to what extent any particular metric (e.g., a poverty metric) captures relevant features of often-complex and multifaceted social and human concepts (e.g., poverty). Measurement modeling, more specifically, denotes the methods by which theoretical unobservable constructs are linked to observable properties. This process of modeling or “operationalizing” the unobservable theoretical construct in terms of observable properties necessarily involves making assumptions about the nature of the construct and its relationship to observable properties. Such necessary assumptions, however, introduce the potential for mismatches between theoretical understandings and measurements.Footnote 7 Jacobs and Wallach illustrate this by highlighting the assumptions and choices involved in measuring socioeconomic status, teacher effectiveness, recidivism risk, and patient benefit. Measurement validation describes a wide range of methods that can be employed to scrutinize potential mismatches between constructs and their operationalizations. These methods target various subtypes of validity such as face validity, content validity, convergent validity, discriminant validity, predictive validity, hypothesis validity and consequential validity.
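To make the distinction between a construct and its operationalization concrete, the following sketch – my own illustration, not drawn from Jacobs and Wallach – simulates a latent construct and two imperfect operationalizations of it, then computes simple correlational checks corresponding to convergent and predictive validity. All variable names, noise levels and the outcome model are hypothetical assumptions.

```python
# A minimal, hypothetical sketch of two statistical validation checks
# (convergent and predictive validity). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Latent construct we cannot observe directly (e.g., "poverty").
latent_poverty = rng.normal(size=n)

# Two imperfect operationalizations of the same construct.
income_proxy = latent_poverty + rng.normal(scale=0.5, size=n)   # income-based measure
asset_proxy = latent_poverty + rng.normal(scale=0.8, size=n)    # asset-index measure

# An outcome the construct is theorized to influence (e.g., food insecurity).
outcome = 0.7 * latent_poverty + rng.normal(scale=1.0, size=n)

# Convergent validity: do two operationalizations of the same construct agree?
convergent_r = np.corrcoef(income_proxy, asset_proxy)[0, 1]

# Predictive validity: does the operationalization relate to the theorized outcome?
predictive_r = np.corrcoef(income_proxy, outcome)[0, 1]

print(f"Convergent validity (income vs. asset proxy): r = {convergent_r:.2f}")
print(f"Predictive validity (income proxy vs. outcome): r = {predictive_r:.2f}")
```

Such correlational checks cover only the subtypes of validity that are easily quantified; face, content and consequential validity require contextual, theoretical and normative judgment of the kind the following sections emphasize.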
Measurement modeling and validation are commonplace in scholarship in the social and human sciences, where researchers have developed targeted and sophisticated methodologies to carefully craft and understand the relationship between theoretical constructs and their operationalizations. In contrast, Jacobs and Wallach note that such methodological resources are generally absent from work on computational systems such as AI applications, leading researchers to insufficiently appreciate the complexities involved in social measurement and the potentially far-reaching fairness-related consequences of mismatches between constructs and operationalizations. The solution, their paper suggests, lies in importing methodologies such as measurement modeling and validation from the social and human sciences into applied AI.
Jacobs and Wallach are no doubt right to argue that the integration of measurement modeling and validation into AI would constitute an improvement over the status quo and might, indeed, help mitigate some of the fairness-related harms caused by applied AI. My claim in this paper, however, is that this effort and the broader culture of interdisciplinary research it represents can, at the same time, contribute to the fundamental cause of the very ethical issues they try to mitigate. What seems paradoxical becomes clearer when we look at a concrete area of applied AI where suggestions such as those by Jacobs and Wallach have been adopted.
2.2. Educational AI
Early applications of AI methods such as machine learning to education took place under the label of “learning analytics.” In the early 2010s, learning analytics began to position itself as an independent discipline distinct from the more established field of educational science (Siemens 2013). Today, most uses of AI in education fall under the banner of “educational AI.” I suspect that this is partly an act of rebranding. It roughly tracks how much work on applied machine learning rebranded from “data analytics/data science” to “applied AI” in the mid-2010s, as the emergence of deep learning breathed new life into the by-then somewhat antiquated and often negatively connoted label of AI. Granted, the precise relationship between learning analytics and educational AI is likely much more complex (cf. Rienties et al. 2020). But such complexity should not distract us from the point most important for the purposes of my argument: AI applications in educational and pedagogical settings have commonly been viewed as somewhat outside of established disciplines such as educational science.
In the space of educational AI and learning analytics, we can find precisely what Jacobs and Wallach rightfully problematize: AI applications often conflate concepts with the data being predicted. As Selwyn (2019, p. 12) notes, “[…] many laypeople tend not to treat the data they are presented with as ‘proxies’ or even ‘indicators.’ Instead, any data tends to slip quickly into being understood as a direct measure […].” As a result, the evaluation of a machine learning model is reduced to an assessment of how well it estimates this data, rather than involving more substantive engagement with how the data measures (or fails to measure) the concept of interest (e.g., learning). In a study on enrollment, for instance, the authors note that comparison with the predicted data “serves as an empirical way of guaranteeing the accuracy of any ‘scoring’ done by the final data-mining model” (Luan and Zhao 2006, p. 119). Such conflations of proxies are particularly problematic when the gap between what is measured and what is intended to be measured manifests differentially across demographic groups. Students whose learning behaviors or circumstances diverge from majority patterns used in model development can then face systematic disadvantage and fairness-related harms at the hands of poorly validated educational AI systems (Baker and Hawn 2022; Gardner et al. 2019).
In line with Jacobs and Wallach, many critical commentaries have urged such AI applications to import established measurement modeling and validation methodologies from educational science (Gašević et al. 2015; Knobbout and Van Der Stappen 2020). To some extent, such calls for methodological transfer have been successful: a growing number of AI applications in education now conduct some form of measurement modeling and validation. But this adoption often happens in a rather limited way. Appreciating the potential risks and eager to signal the legitimacy of their discipline, practitioners adopt measurement validation and modeling but often focus primarily on those elements that can easily be simplified into statistical requirements, while ignoring more contextual, theoretical, or normative aspects of measurement (cf. Winne 2020; see also Alexandrova and Haybron 2016).
This dynamic is representative of a pattern characteristic of the transfer of existing disciplinary methodologies, ethics, or standards into applied AI. Disciplinary norms and methods are (i) imported to address an ethical or epistemic shortcoming but also to signal the field’s legitimacy, (ii) transformed into narrow technical requirements, and (iii) ultimately operate independently of the broader methodological, disciplinary, and professional structures that originally supported them. In the case of educational AI, this meant that validity was often reduced to statistical tests and to those subkinds of validity amenable to quantitative analysis. At the same time, practitioners often continued to sidestep broader theoretical and cultural complexities surrounding concepts such as learning or education, and complications arising from how education and learning are shaped by highly contextual, social, or individual features. However, even statistically validated measurements can be ineffective and unethical if they are employed in ways that ignore the social, institutional or personal features of an educational context, are embedded in opaque workflows, used for summative control, encourage student surveillance, or are deployed without continuous monitoring (cf. Ferguson and Clow 2017).
Importing measurement validation methodologies from educational science into applied AI also does little to address other, often more fundamental, ways in which educational AI can conflict with the ethical values, standards, pedagogical principles, methodological norms and best practices common to established research in the educational domain. Central to pedagogical work and research in educational science are not only principles such as informed consent, equity, and privacy, but also student autonomy, teacher discretion, and an emphasis on the highly complex, nuanced, socially situated nature of learning. The very act of quantification and classification central to educational AI, and the ways in which applications are marketed, however, can grossly underestimate the complexities of learning, encourage performativity and introduce threats of digital surveillance (Knox et al. 2020). Not only the lack of measurement validation but various other characteristics of educational AI can thus stand in direct conflict with the norms, methods, values and best practices that educational science has cultivated – often through a slow and painful process of learning from past mistakes. This point is not limited to discriminative AI but extends to how work on generative AI in education can stand in opposition to various established pedagogical principles and values.
This leads us to a more general insight: Many ethical issues surrounding educational AI arise precisely because of the ways in which such work is regarded as external to the existing normative and methodological frameworks of pedagogical practice and educational science. Research that seeks to translate existing methods, best practices, or ethical standards from educational science and pedagogical practice to educational AI rests upon this very disciplinary separation. What is more, the (sometimes rather superficial) adoption can even serve to legitimize educational AI’s status, further entrenching this disciplinary arrangement.
2.3. Critiquing interdisciplinary research on AI fairness and measurement validation
Having briefly outlined the case of educational AI, we can bring the broader issue into focus. I argue that work on measurement validation and AI can rest upon and buy into a cause of the very harms it aims to alleviate: an understanding or pursuit of AI applications as belonging to a distinct field of inquiry, external to existing disciplines. It presupposes and reinforces a problematic separation: that AI applications, even when deployed for purposes common in education, healthcare, economics or public policy, operate outside the normative and methodological frameworks of those disciplines. The very need to reintroduce disciplinary methodologies, best practices and ethical frameworks arises only because AI is framed or pursued as autonomous from those normative and epistemic infrastructures. Yet research on fair AI and social measurement often fails to critically interrogate the disciplinary assumptions and demarcations that structure its own practice. The result is a paradox: the attempt to mitigate fairness-related harms relies on a conception of applied AI that endorses the very condition that gives rise to many of those and countless other harms.
This point becomes clearer when we consider the contrasting approach. Suppose we emphatically identify the predictive AI applications discussed in these research papers as novel tools for classification and quantification firmly within disciplines such as psychology, education, medicine or political science (cf. Mussgnug 2022). In this case, practitioners would already be accountable to the particular domain-specific methodological norms, such as the practices of measurement modeling and validation that have developed in distinctive ways across social-science disciplines in order to foster socially responsible and reliable research.
Much work would still need to be done in order to better understand how measurement validation and other methodologies must be adjusted to the particularities of AI applications. At the same time, this approach eliminates the need to import hitherto foreign methodological resources from disciplines in the social sciences to the (purported) realm of applied AI. In this framing, the requirement to engage in validation or other domain-specific measurement practices follows as an internal disciplinary obligation, not an optional enhancement. This highlights how the need for importing measurement validation and modeling only ever emerges as a result of pursuing or framing applied AI as a domain external to existing social science disciplines and their methodological norms.
2.4. Friend or foe to ethical AI?
Work on AI fairness and measurement validation is only one example of a broader culture of interdisciplinarity that has gained traction at the intersection of applied AI and ethics: a mode of research that seeks to mitigate social and ethical harms by importing norms, methodologies and governance frameworks from established disciplines, such as the social sciences or medicine, into applied AI. Some have turned toward fields such as medicine to identify forms of ethical and regulatory governance for emerging AI applications. AI principalism, for instance, adapts an ethical framework based on high-level principles from biomedical ethics (e.g., Beauchamp and Childress 2013) to AI governance (cf. Mittelstadt 2019). Endorsing an “FDA for AI,” some have argued for adapting frameworks for medical regulation and licensing to AI systems (Carpenter and Ezell 2024). Yet others have suggested translating principles of justice from the realm of public institutions to AI applications (Westerstrand 2024).
Such interdisciplinary research approaches applied AI as its own domain, a realm of practice distinct and separate from disciplines such as medicine, finance, psychology, law, or education. This understanding of applied AI as an autonomous domain externalizes the existing disciplinary norms, methodologies, theoretical insights and governance practices cultivated within those fields. It casts applied AI as somewhat of a blank slate, ethically and epistemically uncharted territory, a terra nullius (Mussgnug 2025). The culture of interdisciplinary AI ethics discussed here is keenly aware of the problems caused by this normative vacuum and of the relevance of existing disciplinary resources, which it seeks to reintroduce. In doing so, however, this culture of interdisciplinarity can simultaneously sustain and contribute to the very disciplinary independence of applied AI that gives rise to the problems it attempts to solve. It risks treating an ever-growing list of symptoms while playing into their cause.
3. Three paths forward
How should we respond to this issue? As I see it, there are three options available, which depend, in part, on whether one considers this a problem arising from a certain framing of applied AI research or one stemming from an institutional and sociological disciplinary reality. First, we might hold that the problem lies in a problematic depiction of applied AI research as a distinct discipline. Instead of presenting the kind of research discussed as an interdisciplinary translational exercise, we would reframe it as an act of holding AI applications accountable to the epistemic and ethical frameworks governing the disciplinary fields within which they operate. Second, we might maintain that the organization of applied AI research and the self-understanding of practitioners are such that we must acknowledge applied AI as a distinct discipline, but that the problems arising from this disciplinary autonomy are so grave as to warrant reforming the status quo. For reasons I outline below, I believe that both options mentioned so far converge on the same actionable insight. Lastly, we might hold that the benefits offered by the pursuit of applied AI as a distinct discipline outweigh its ethical downsides. Even in this case, however, we can still encourage greater critical engagement with the complications raised in this paper.
The goal of this section is not to settle this debate. Rather, I seek to emphasize that in exploring this solution space, work promoting ethical AI might benefit from closer engagement with scholarship in the social studies of interdisciplinarity and disciplinarity. To this end, I contextualize the three options outlined below within this scholarship. As a former AI practitioner now engaged in the normative philosophy of science and technology, I do not aspire for my depiction of the social studies of (inter)disciplinarity to be authoritative or comprehensive. I merely hope to do enough to establish the importance of these considerations for the issue this paper raises and to encourage further research at the intersection of AI ethics and the social studies of interdisciplinarity.
3.1. Option 1: Pseudo-interdisciplinarity and a different framing
Our first option is simply to deny that the application of AI in science forms part of a distinct discipline of applied AI. Ethically motivated research efforts presented as interdisciplinary are then actually a particular form of pseudo-interdisciplinarity. Heinz Heckhausen (1972, p. 87) originally characterizes pseudo-interdisciplinarity as “the erroneous proposition that sharing analytical tools such as mathematical models of computer simulation constitutes ‘intrinsic’ interdisciplinarity.” The research presented here, however, might constitute a fascinating second case of pseudo-interdisciplinarity, where a seeming interdisciplinarity emerges from the questionable framing of certain research as a distinct discipline. In other words, it presents a case where pseudo-interdisciplinarity arises from the pseudo-disciplinarity of applied AI.
Going down this route would not deny the existence of a discipline concerned with the study of AI. Detailed historical research and current institutional realities make the case for the emergence of “artificial intelligence” as a discipline, distinctively combining influences from fields such as cognitive science, computer science and philosophy.Footnote 8 Reframing Apostel and coauthors (1972, p. 9), AI is one of many cases where “the ‘inter-discipline’ of yesterday is the ‘discipline’ of today.” But accepting the disciplinary status of artificial intelligence would not commit us to understanding work in applied AI as part of this discipline. A heuristic can help clarify this complex picture (and, so I believe, can be instructive for research on the interdisciplinarity and ethics of AI more broadly).
The term “artificial intelligence” denotes both a task and a tool. As a task, AI is a broad project engaged in understanding and working toward (forms of) machine intelligence. It is the kind of project laid out most famously in the proposal for the Dartmouth summer workshop, in which McCarthy et al. (1955/2006, p. 2) propose work “on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” As a tool, AI denotes a resource that can be employed, for instance, in the service of constructing computational predictive models within diverse areas of application.
In applied AI, AI figures primarily as a tool. Such work focuses on implementing artificial intelligence solutions to solve real-world problems. Clearly, work in applied AI is both informed by and feeds back into the broader project of constructing and understanding AI (AI as a task). Nonetheless, it is distinct from it. And we can acknowledge the discipline of artificial intelligence while holding that concrete work applying AI to issues such as protein structure prediction (Jumper et al. 2021), poverty estimation (Aiken et al. 2022), personalized learning (Chen 2025) or depression screening (Priya et al. 2020) is best understood as the use of a novel tool within existing disciplinary contexts rather than as its own distinct disciplinary endeavor.
An analogy between AI and computers can help further elucidate this picture. Today, computer science is its own discipline with institutional structures, professional organizations, and a shared research goal. Very roughly, computer science is concerned with the study and development of computational systems. It involves both the theoretical study of computation and the development of concrete hardware and software systems.Footnote 9 But what about cases where we use a computer to instantiate an agent-based simulation of an economic market or to construct a complex statistical model of a social psychological phenomenon? It would be far-fetched to consider these as instances of computer science today.Footnote 10 Instead, I suspect that we are inclined to consider them the use of a tool (particular computer hardware and software) firmly within fields such as economics or social psychology. One could argue that AI models are much like computers: While there is a field of study concerned with better understanding artificial intelligence or developing novel formal AI architectures, it would be questionable to include in this field the mere application of AI tools.Footnote 11
Rejecting applied AI’s status as a distinct discipline reveals the culture of interdisciplinarity examined here as a form of pseudo-interdisciplinarity – one whose very structure may counteract the ethical objectives it aspires to achieve. Acknowledging this does not entail abandoning efforts to highlight the importance of methodologies such as measurement validation, concepts such as measurement accuracy, or existing best practices and regulatory frameworks. Instead, it puts into focus an alternative framing: rather than pursuing the transplantation of disciplinary norms and resources into an ostensibly autonomous field of applied AI, we could prioritize the responsible integration of AI tools within existing disciplinary infrastructures.Footnote 12 This approach, I suspect, might offer a promising pathway toward realizing the ethical and social aspirations of such research.
Distinct disciplines have cultivated diverse sets of methodological and conceptual resources, norms, best practices, and governance and accountability frameworks. They have developed these resources in response to the particular ethical and epistemic challenges present in their research and as lessons learned from past failures. Among countless other examples, social scientists have established forms of concept validation to anticipate ethically relevant mismatches between concepts and operationalizations, psychiatry has developed methodologies such as differential diagnosis in order to minimize the risk of misdiagnosis and associated harms (Frances 2013), and medicine has adopted strict norms around privacy and informed consent in order to protect vulnerable patient communities (Belmont Report, 1979). For all justified criticism regarding disciplinary rigidity, it must also be acknowledged that these distinct disciplinary features represent a certain historically cultivated wisdom, accumulated domain experience, and attentiveness to contextual complications (cf. Nissenbaum 2009).
If AI applications were more firmly understood as novel elements within existing domains of practice, AI practitioners would already be accountable to their methodologies, norms, virtues, best practices and regulations. Conversely, as interdisciplinary research frames applied AI as a distinct domain, these contextual norms remain external to AI practice. As such, they require separate justification, and AI practitioners might easily interpret adherence to them as an optional effort rather than an inherent obligation – a problem already evident in limited compliance with imported methodologies and ethical frameworks (cf. Hagendorff 2020; Mittelstadt 2019; Munn 2023).
3.2. Option 2: Challenging existing disciplinary structures
It might be argued that a mere reframing misses the point. The disciplinarity of applied AI, so one might argue, stems from an institutional and sociological reality. On this reading, the organization of research and self-understanding of researchers is such that applied AI is indeed a distinct discipline. Nonetheless, I believe that we might still have reason and room to articulate a counter-imaginary to those disciplinary realities in light of the problems they pose to ethical AI. Such a proposal might seem far-fetched. However, work in the social studies of science reveals to us that emerging disciplines are often much more contingent than ordinarily assumed.
As Andrew Barry and Georgina Born (2013, p. 8) note, “disciplinary boundaries are neither entirely fixed nor fluid; rather, they are relational and in formation.” In this, Barry and Born echo influential work by Timothy Lenoir (1997), who opposes a naïve conception of scientific knowledge as disinterested and the resulting view of scientific disciplines as forming independently and quasi-necessarily around their subject matter. Instead, Lenoir explicates how disciplines emerge within cultural, political, social and economic struggle. Disciplines are political structures. They mediate between the realm of scientific research and broader cultural, economic, and social forces.Footnote 13 As he puts it: “they are creatures of history reflecting human habits and preferences rather than a fixed order of nature” (Lenoir 1997, p. 58).
Applied AI, to the extent that it constitutes an independent discipline, is no exception. Its emergence as a discipline was equally shaped in negotiation with complex political, social and economic dynamics. A comprehensive account or even a sketch of this development goes beyond the scope of this paper (and, quite frankly, my expertise). Instead, I seek to focus on how the disciplinary independence of applied AI can tie into broader economic motives specifically through the ways in which it circumvents existing domain-specific norms.
If applied AI denotes a novel autonomous discipline, practitioners are (at least prima facie) freed from engaging with the often-extensive methodological, theoretical, ethical and regulatory norms that govern existing disciplines. This serves well both the ideological environment and the economic motives surrounding AI applications. In 2020, industry developed 96% of the largest AI models. As of 2020, roughly 70% of AI PhD graduates went into industry (a percentage that has likely increased since), and the hiring of academic faculty working on AI into industry increased eightfold between 2006 and 2020 (Ahmed et al. 2023). Closely tied to commercial interests, AI applications are often associated with themes of “disruption” or with the motivation “to move fast and break things” in efforts to rush products to market (Cook 2020; Daub 2020; Zuckerberg 2012). Ignorance of the many resources, standards, and norms that existing disciplines have cultivated becomes instrumental to these ends.
Work in the social studies of science further stresses how disciplines are sustained through shared institutions, culture and practice. Part of this is so-called “boundary work” – practices that establish and maintain disciplinary identity and distinctiveness (Gieryn 1999). The concept of boundary work enables us to theorize what became evident in the previous section. Since the sometimes superficial and selective import of existing methodologies, ethical frameworks, or best practices can serve to signal applied AI’s (and its subfields’) disciplinary legitimacy, it constitutes boundary work.
Acknowledging both the contingent nature of discipline formation and the importance of boundary work in sustaining disciplinary identity lends credence to two insights. First, the disciplinarity of applied AI might have emerged in part precisely to circumvent existing ethical and epistemic norms in service of specific economic interests. Second, challenging the framing of ethically motivated scholarship on applied AI as translating epistemic and ethical frameworks from established disciplines might itself challenge the disciplinarity of applied AI.Footnote 14 Options one and two thus converge on how to respond to the problem outlined in Section 2. We are asked to reject the framing of such research as translational or interdisciplinary in light of its role in legitimizing the disciplinary distinctiveness of applied AI – whether we take this disciplinary status to be contestable or only objectionable.
3.3. Option 3: Exploring different modes of interdisciplinary research
A third option would be to accept the disciplinary status of applied AI (and its subdisciplines), holding that it offers benefits that outweigh the ethical drawbacks outlined in this paper. For instance, the organization of applied AI in a distinct discipline might allow practitioners to better leverage economic opportunities, share transferable insights across fields of AI application, or devise overarching ethical frameworks that address issues arising from the technical specificities of AI tools (e.g., opacity).
However, nothing thereby prevents us from engaging more critically with the drawbacks that such disciplinary independence brings and with how well-intended mitigation measures can paradoxically reinforce these issues. Even when granting applied AI disciplinary status, we might still explore alternative, more ethically beneficial ways of pursuing interdisciplinary research. To this end, work in the social studies of science can again serve as an important resource, both for characterizing the precise nature of the culture of interdisciplinary research problematized here and for offering visions for alternative interdisciplinary engagement.
The existing taxonomy around the genus of interdisciplinarity is both rich and complex. At the most general level, one can distinguish between multi-, inter- and transdisciplinarity. Multidisciplinarity merely involves the juxtaposition or cooperation of disciplines, which continue to operate firmly within established disciplinary framings (Barry and Born 2013, p. 9). In contrast, interdisciplinarity involves an effort to proactively integrate or synthesize perspectives and methodologies from distinct disciplines (Barry and Born 2013, p. 8). Characterizations of transdisciplinarity are somewhat more contested, but the term generally denotes a more radical attempt to transcend disciplinary categorizations, as well as strict demarcations between scientific experts and non-academic stakeholders (Pathways to Sustainability, n.d.).
The culture of ethically motivated research at the intersection of applied AI and established disciplines explored in this paper presents itself as pursuing the proactive integration of methodological, moral, and conceptual resources from one set of disciplines (psychology, education, political science, etc.) into applied AI. As such, it presents a case of interdisciplinary research. Interdisciplinarity (ID) can be further distinguished along multiple dimensions. For instance, one can differentiate between methodological ID and theoretical ID (Bruun et al. 2005). As Klein (2010, p. 19) notes: “The typical motivation in Methodological ID is to improve the quality of results. The typical activity is borrowing a method or concept from another discipline […].” In contrast, theoretical ID “connotes a more comprehensive general view and epistemological form. The outcomes include conceptual frameworks for analysis of particular problems, integration of propositions across disciplines, and new syntheses based on continuities between models and analogies” (Klein 2010, p. 20). Whereas methodological ID is exclusively instrumental and often somewhat opportunistic, theoretical ID can be critical, involving the interrogation and transformation of existing disciplinary knowledge structures (Klein 2010, p. 23). Within these categories, we can identify the culture of interdisciplinarity problematized in this paper as an instance of instrumental and somewhat opportunistic methodological ID.
Another distinction is made by Andrew Barry and Georgina Born (2008), who classify ID along integrative-synthesis (the integration and synthesis of disciplinary components), subordination-service (where one discipline is subordinated or put into service of another), and agonistic–antagonistic modes (emerging from an oppositional relationship to existing disciplinary knowledge practices). Barry and Born also characterize three distinct logics that can motivate interdisciplinary research: a logic of accountability responding to societal demands, a logic of innovation responding to economic opportunities, and an ontological logic which is motivated by a more fundamental reconceptualization of the objects of research.
In research at the intersection of applied AI and established disciplines, the established disciplines are subordinated to a service role. Within what Barry and Born label a logic of accountability, disciplinary resources are imported to mitigate ethical harms caused by AI applications and to respond to social and political demands for responsible AI practice. As shown, however, this culture of interdisciplinary research can sometimes result in only superficial adoption and in the reframing of complex epistemic and ethical conventions as oversimplified technical engineering problems. In doing so, imported resources are stripped of their context-specificity, their theoretical grounding, and their embeddedness in broader epistemic and ethical practice. This represents what critical voices have identified as a limited additive approach characterized by only superficial engagement of machine learning and AI research with existing disciplines, particularly in the social sciences and humanities. Absent engagement with broader supporting factors and with their often nuanced application, imported resources can easily fail to have the ethically beneficial impact scholars are aiming for.
Here, scholarship in the social studies of science on the kinds of interdisciplinarity can provide us with concrete case studies and theoretical analyses of different forms of interdisciplinary research. Even if we insist on the net benefit of applied AI’s disciplinary independence, these resources might help identify and pursue modes of interdisciplinary research that better serve the intended goal of more ethical practice and mitigate some of the drawbacks highlighted in this paper.
4. Conclusion
This paper sought to problematize a certain culture of interdisciplinarity at the intersection of applied AI and ethics.Footnote 15 While this line of research rightly emphasizes the importance of existing methodologies and norms, I have called for more critical engagement with how this particular culture of interdisciplinarity can legitimize the notion of applied AI as a distinct field of research – absent any acknowledgment of how the disciplinary independence of applied AI can give rise to some of the very ethical problems it seeks to address. I take raising this problem to be the primary contribution of my paper.
In response, I have outlined three paths forward. My examination hopes to stimulate further meta-level reflection on how assumptions and realities of applied AI’s disciplinarity shape the pursuit of interdisciplinarity in ethical AI. First, we might hold that the problem lies in a problematic framing of applied AI research as a distinct discipline. Consequently, we would call for the kind of research discussed to be presented not as interdisciplinary but as an act of holding AI applications accountable to the epistemic and ethical frameworks governing the disciplinary fields within which they operate. Second, we might hold that the organization of applied AI research and the self-understanding of practitioners are such that we must acknowledge applied AI as a distinct discipline, but that the problems arising from this disciplinary autonomy are so grave as to warrant reforming this status quo. Lastly, we might hold that the distinct benefits offered by the pursuit of applied AI as a distinct discipline outweigh its ethical downsides, while still encouraging greater critical engagement with the complications raised in this paper.
I did not argue for any one of these options. Instead, my brief and high-level discussion sought to underscore how research on interdisciplinarity in the social studies of science can serve as an important resource for navigating any of these paths forward. In doing so, I hope to stimulate more substantial collaborations in exploring how different cultures of interdisciplinarity relate to the pursuit of more ethical AI.
Acknowledgements
I would like to thank Elisa Cardamone and Meghan M. Shea for their feedback on earlier versions of this article. I also thank two anonymous reviewers for their exceptionally productive comments.
Funding statement
Work on this paper was supported by a Baillie Gifford Studentship in AI ethics at the Centre for Technomoral Futures, University of Edinburgh and an Interdisciplinary Ethics Fellowship at Stanford’s McCoy Family Center for Ethics in Society and Apple University.
Competing Interests
The author declares no competing interests.