
Interdisciplinary research: Friend or foe to ethical AI?

Published online by Cambridge University Press:  16 February 2026

Alexander Martin Mussgnug*
Affiliation:
McCoy Family Center for Ethics in Society, Stanford University, Stanford, CA, USA; Apple University, Cupertino, CA, USA

Abstract

This paper investigates a specific culture of interdisciplinarity that has gained traction at the intersection of applied AI and ethics. To address social and ethical harms of AI applications, scholars have suggested importing norms, methodologies and governance frameworks from established disciplines such as the social sciences or medicine. I show how this importation presupposes and endorses a framing of applied AI as a domain separate from established disciplines. Yet, such separation is what initially allows AI practitioners to operate outside those disciplinary norms that have evolved to prevent harms now associated with AI applications. Conversely, if AI applications were understood as situated firmly within these disciplines, practitioners would already be accountable to their norms and standards. Paradoxically, this culture of interdisciplinarity might thus reinforce a problematic disciplinary isolation of applied AI underlying the very ethical issues it seeks to mitigate – fighting symptoms while playing into their cause. In response, I outline three paths forward.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press.

1. Introduction

AI applications give rise to a wide range of ethical and epistemic challenges. On the side of epistemology, researchers have highlighted AI outcomes’ limited replicability and robustness, as well as their misalignment with users’ expectations and needs (e.g., Herrmann et al. 2024a; Hutson 2018; Messeri and Crockett 2024; Semmelrock et al. 2025). On the side of ethics, scholars have raised a plethora of issues, from algorithmic bias and fairness-related harms to AI’s potential to undermine trust or human agency (e.g., Fazelpour and Danks 2021; Floridi 2023; Selbst et al. 2019).

This paper investigates a specific culture of interdisciplinarity that has gained traction at the intersection of applied AI and ethics: a mode of research that seeks to mitigate social and ethical harms by importing norms, methodologies and governance frameworks from established disciplines, such as the social sciences or medicine, into applied AI.Footnote 1 While acknowledging its contributions, I show how this importation presupposes and endorses a framing of applied AI as a domain separate from established disciplines. Yet, such separation is what in the first place allows AI practitioners to operate outside disciplinary norms that have evolved precisely to prevent harms now associated with AI applications. Were AI applications understood as situated firmly within these disciplines, practitioners would already be accountable to their ethical and methodological standards. Paradoxically, this particular culture of interdisciplinarity might thus reinforce a problematic disciplinary isolation of applied AI at the source of the very ethical issues it seeks to mitigate. It can risk treating an ever-growing list of symptoms while playing into their cause.

In response, I outline three paths forward. They differ in the extent to which they view applied AI’s disciplinary status as primarily a matter of framing or institutional reality. Rather than advocating for one approach, I sketch how each path might benefit from closer engagement with scholarship in the social studies of interdisciplinarity. As a former AI practitioner now engaged in the ethics of science and technology, my motivation is to encourage further research at the intersection of applied AI, ethics and the social studies of interdisciplinarity and disciplinarity.

2. A specific culture of interdisciplinarity

Over the past decade, ethical and social issues related to AI have garnered increasing attention. A wide range of research has emerged that attempts to confront these issues – with much of it being of interdisciplinary character. Amongst these efforts, a specific mode of interdisciplinary engagement seeks to mitigate ethical harms by importing norms, methodologies and governance frameworks from established disciplines into applied AI.Footnote 2

This section illustrates and problematizes this culture of interdisciplinarity. It does so in four parts. First, I introduce research on AI fairness in which scholars have tied fairness-related harms to AI practitioners’ tendency to conflate social concepts and their operationalizations. Second, I illustrate this kind of research and its implications in the case of educational AI. Third, I outline how ethically-motivated research at the intersection of social science measurement and AI can paradoxically contribute to some of the ethical issues it seeks to mitigate. Lastly, I show how this research stands as representative for a broader culture of ethically-motivated interdisciplinary scholarship at the intersection of established disciplines and applied AI.

2.1. Introducing interdisciplinary research on AI fairness and construct validity

The particular culture of interdisciplinarity I seek to examine here can be illustrated by prominent work at the intersection of social science measurement and applied AI. Such research often begins by acknowledging that significant ethical issues can arise when AI practitioners insufficiently distinguish between the concept of interest and the particular data their model predicts. Those engaged in this endeavor note that AI practitioners “are often inclined to collapse the distinctions between constructs and their operationalizations, either colloquially or epistemically. […]” (Jacobs and Wallach 2021, p. 10) and that “many times in practice, when machine learning makes claims, it forgets to recognize and communicate that the measurements serving as ‘ground truth’ are a black box that hide problems of measurement, and that the ground truth is not construct itself” (Malik 2020, p. 11).Footnote 3 Concretely, this might mean that practitioners fail to distinguish between the concept of depression and the outcome of a particular psychometric test of depression, the concept of socioeconomic status and a specific measurement of wealth, or an employee’s work performance and her ranking in a particular evaluation matrix that an AI model is trained to predict.Footnote 4

Scholars have emphasized the significant ethical stakes involved in this practice, often focusing on how overlooked discrepancies between a target concept and its operationalization can generate fairness-related harms in decision-making contexts. A prominent case in point is the insufficient recognition of the potential incongruities between the normative concept of “care need” – central to healthcare resource allocation – and its possible operational proxy, “care costs.” When such conceptual distinctions are neglected, factors like a patient’s insurance status, which influences billing practices, can distort algorithmic recommendations or policy decisions, ultimately resulting in inequitable care distribution (Jacobs and Wallach 2021, p. 4; Tal 2023). Analogously, underacknowledged differences between poverty and a particular operationalization used by AI-driven poverty prediction (Mussgnug 2022) or between teacher effectiveness and a particular metric of it (O’Neil 2016) can result in fairness-related harms.Footnote 5
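
To make the mechanism concrete, the following minimal sketch simulates it with entirely hypothetical data and variable names (illustrative only, not drawn from the studies cited above). Under these assumptions, a model that recovers billed costs perfectly still under-serves a group whose needs are systematically underbilled:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical population: true (unobserved) care need and a flag for
# patients whose billing systematically understates their need.
need = rng.gamma(shape=2.0, scale=1.0, size=n)        # target construct
underbilled = rng.random(n) < 0.3                     # e.g., uninsured patients

# Operationalization: observed care costs track need, but are deflated
# for the underbilled group and carry measurement noise.
costs = need * np.where(underbilled, 0.6, 1.0) + rng.normal(0, 0.2, n)

# A "perfect" predictor of the proxy recovers costs, not need.
predicted = costs

# Allocate a scarce resource to the top 10% by predicted cost.
selected = predicted >= np.quantile(predicted, 0.9)

# Among patients with equally high true need, the underbilled group is
# selected far less often -- a fairness-related harm produced purely by
# the construct/operationalization mismatch.
high_need = need >= np.quantile(need, 0.9)
for group, mask in [("underbilled", underbilled), ("other", ~underbilled)]:
    rate = selected[high_need & mask].mean()
    print(f"high-need {group}: selected {rate:.1%}")
```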

In response to these problems, researchers have suggested importing methodological resources and insights around measurement modeling and validation from disciplines in the social sciences into AI research (e.g., Guerdan et al. 2022; Jacobs and Wallach 2021; Kleinberg et al. 2018). One prominent and influential example of this approach is a 2021 FAccT paper by Abigail Z. Jacobs and Hanna Wallach.Footnote 6

Jacobs and Wallach begin by introducing measurement modeling and measurement validation. Roughly speaking, measurement validation and modeling both concern the relationship between a measurement and a concept of interest – to what extent any particular metric (e.g., a poverty metric) captures relevant features of often-complex and multifaceted social and human concepts (e.g., poverty). Measurement modeling, more specifically, denotes the methods by which theoretical unobservable constructs are linked to observable properties. This process of modeling or “operationalizing” the unobservable theoretical construct in terms of observable properties necessarily involves making assumptions about the nature of the construct and its relationship to observable properties. Such necessary assumptions, however, introduce the potential for mismatches between theoretical understandings and measurements.Footnote 7 Jacobs and Wallach illustrate this by highlighting the assumptions and choices involved in measuring socioeconomic status, teacher effectiveness, recidivism risk, and patient benefit. Measurement validation describes a wide range of methods that can be employed to scrutinize potential mismatches between constructs and their operationalizations. These methods target various subtypes of validity such as face validity, content validity, convergent validity, discriminant validity, predictive validity, hypothesis validity and consequential validity.
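
For readers more at home in code than in measurement theory, a rough sketch of what the more readily quantifiable of these checks might look like follows; the data, proxy, and construct names are hypothetical, and the point is merely that convergent, discriminant, and predictive validity admit statistical probes, while face, content, and consequential validity do not:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical operationalization of "socioeconomic status": household income.
income = rng.lognormal(mean=10, sigma=0.5, size=n)

# Alternative measures: an established SES scale, an unrelated construct,
# and a later outcome the construct is expected to predict (all simulated).
ses_index = np.log(income) + rng.normal(0, 0.3, n)        # composite SES scale
extraversion = rng.normal(0, 1, n)                         # unrelated construct
graduation = (np.log(income) + rng.normal(0, 1, n)) > 10   # later outcome

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Convergent validity: the proxy should track established measures of SES.
print("convergent: ", corr(np.log(income), ses_index))
# Discriminant validity: it should not track conceptually unrelated constructs.
print("discriminant:", corr(np.log(income), extraversion))
# Predictive validity: it should relate to outcomes the construct is
# theoretically expected to predict.
print("predictive:  ", corr(np.log(income), graduation.astype(float)))
# Face, content, and consequential validity resist this kind of automation:
# they require substantive, contextual, and normative judgment.
```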

Measurement modeling and validation are commonplace in scholarship in the social and human sciences, where researchers have developed targeted and sophisticated methodologies to carefully craft and understand the relationship between theoretical constructs and their operationalizations. In contrast, Jacobs and Wallach note that such methodological resources are generally absent from work on computational systems such as AI applications, leading researchers to insufficiently appreciate the complexities involved in social measurement and the potentially far-reaching fairness-related consequences of mismatches between constructs and operationalizations. The solution, so their paper suggests, lies in importing methodologies such as measurement modeling and validation from the social and human sciences to applied AI.

Jacobs and Wallach are no doubt right to argue that the integration of measurement modeling and validation into AI would constitute an improvement over the status quo and might, indeed, help mitigate some of the fairness-related harms caused by applied AI. My claim in this paper, however, is that this effort and the broader culture of interdisciplinary research it represents can, at the same time, contribute to the fundamental cause of the very ethical issues they try to mitigate. What seems paradoxical becomes clearer when we look at a concrete area of applied AI where suggestions such as those by Jacobs and Wallach have been adopted.

2.2. Educational AI

Early applications of AI methods such as machine learning to education took place under the label of “learning analytics.” In the early 2010s, learning analytics began to position itself as an independent discipline distinct from the more established field of educational science (Siemens 2013). Today, most uses of AI in education fall under the banner of “educational AI.” I suspect that this is partly an act of rebranding. It roughly tracks with how much work on applied machine learning rebranded from “data analytics/data science” to “applied AI” in the mid 2010s, as the emergence of deep learning breathed new life into the, at this point somewhat antiquated and often negatively connotated, label of AI. Granted, the precise relationship between learning analytics and educational AI is likely much more complex (cf. Rienties et al. 2020). But such complexity should not distract us from the point most important for the purposes of my argument: AI applications in educational and pedagogical settings have commonly been viewed as somewhat outside of established disciplines such as educational science.

In the space of educational AI and learning analytics, we can find precisely what Jacobs and Wallach rightfully problematize: AI applications often conflate concepts with the data being predicted. As Selwyn (2019, p. 12) notes, “[…] many laypeople tend not to treat the data they are presented with as ‘proxies’ or even ‘indicators.’ Instead, any data tends to slip quickly into being understood as a direct measure […].” As a result, the evaluation of a machine learning model is reduced to an assessment of how well it estimates this data, rather than involving more substantive engagement with how the data measures (or fails to measure) the concept of interest (e.g., learning). In a study on enrollment, for instance, the authors note that comparison with the predicted data “serves as an empirical way of guaranteeing the accuracy of any ‘scoring’ done by the final data-mining model” (Luan and Zhao 2006, p. 119). Such conflations of proxies are particularly problematic when the gap between what is measured and what is intended to be measured manifests differentially across demographic groups. Students whose learning behaviors or circumstances diverge from majority patterns used in model development can then face systematic disadvantage and fairness-related harms at the hands of poorly-validated educational AI systems (Baker and Hawn 2022; Gardner et al. 2019).
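
One way such gaps become visible is through disaggregated or “slicing” evaluation of the kind cited above. The sketch below uses hypothetical data and group labels (not drawn from any of the cited studies) to show how a model can appear accurate against its proxy overall while erring far more often for a minority of students whose behavior diverges from the patterns dominant in development:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

# Hypothetical cohort: 15% of students follow "non-majority" study patterns.
minority = rng.random(n) < 0.15

# Proxy label: course completion, which the model predicts from features
# that reflect majority study patterns.
completed = rng.random(n) < 0.7

# A hypothetical model that is well calibrated for the majority but noisy
# for students whose behavior the development data under-represents.
error_rate = np.where(minority, 0.35, 0.08)
predicted = np.where(rng.random(n) < error_rate, ~completed, completed)

def accuracy(mask):
    return float((predicted[mask] == completed[mask]).mean())

print("overall accuracy:", round(accuracy(np.ones(n, bool)), 3))  # looks fine
print("majority slice:  ", round(accuracy(~minority), 3))
print("minority slice:  ", round(accuracy(minority), 3))          # much worse
# Aggregate proxy accuracy hides both the subgroup disparity and the deeper
# question of whether "completion" measures "learning" at all.
```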

In line with Jacobs and Wallach, many critical commentaries have urged such AI applications to import established measurement modeling and validation methodologies from educational science (Gašević et al. 2015; Knobbout and Van Der Stappen 2020). To some extent, such calls for methodological transfer have been successful, as increasingly many AI applications in education conduct some form of measurement modeling and validation. But such adoption often happens in a rather limited way. Appreciating the potential risks and eager to signal the legitimacy of their discipline, practitioners adopt measurement validation and modeling but often focus primarily on those elements that can easily be simplified into statistical requirements, while ignoring more contextual, theoretical, or normative aspects of measurement (cf. Winne 2020; see also Alexandrova and Haybron 2016).

This dynamic is representative of a pattern characteristic of the transfer of existing disciplinary methodologies, ethics, or standards into applied AI. Disciplinary norms and methods are (i) imported to address an ethical or epistemic shortcoming but also to signal the field’s legitimacy, (ii) transformed into narrow technical requirements, and (iii) ultimately left to operate independently of the broader methodological, disciplinary, and professional structures that originally supported them. In the case of educational AI, this meant that validity was often reduced to statistical tests and those subkinds of validity amenable to quantitative analysis. At the same time, practitioners often continued to sidestep broader theoretical and cultural complexities surrounding concepts such as learning or education, and complications arising from how education and learning are shaped by highly contextual, social, or individualistic features. However, even statistically validated measurements can be ineffective and unethical if they are employed in ways that ignore the social, institutional or personal features of an educational context, are embedded in opaque workflows, used for summative control, encourage student surveillance, or are deployed without continuous monitoring (cf. Ferguson and Clow 2017).

Importing measurement validation methodologies from educational science to applied AI also does little to address other, often more fundamental, ways in which educational AI can conflict with ethical values, standards, pedagogical principles, methodological norms and best practices common to established research in the educational domain. Central to pedagogical work and research in educational science are not only principles such as informed consent, equity, and privacy, but also student autonomy, teacher discretion, and an emphasis on the highly complex, nuanced, socially-situated nature of learning. The very act of quantification and classification central to educational AI and the ways in which applications are marketed, however, can grossly underestimate the complexities of learning, encourage performativity and introduce threats of digital surveillance (Knox et al. 2020). Not only the lack of measurement validation but various other characteristics of educational AI can thus stand in direct conflict with the norms, methods, values and best practices that educational science has cultivated – often through a slow and painful process of learning from past mistakes. This point is not limited to discriminative AI but extends to how work on generative AI in education can stand in opposition to various established pedagogical principles and values.

This leads us to a more general insight: Many ethical issues surrounding educational AI arise precisely because of the ways in which such work is regarded as external to the existing normative and methodological frameworks of pedagogical practice and educational science. Research that seeks to translate existing methods, best practices, or ethical standards from educational science and pedagogical practice to educational AI rests upon this very disciplinary separation. What is more, the (sometimes rather superficial) adoption can even serve to legitimize educational AI’s status, further entrenching this disciplinary arrangement.

2.3. Critiquing interdisciplinary research on AI fairness and measurement validation

Having briefly outlined the case of educational AI, we can now bring the broader issue into focus. I argue that work on measurement validation and AI can rest upon and buy into a cause of the very harms it aims to alleviate: an understanding or pursuit of AI applications as belonging to a distinct field of inquiry, external to existing disciplines. It presupposes and reinforces a problematic separation: that AI applications, even when deployed for purposes common in education, healthcare, economics or public policy, operate outside the normative and methodological frameworks of those disciplines. The very need to reintroduce disciplinary methodologies, best practices and ethical frameworks arises only as AI is framed or pursued as autonomous from those normative and epistemic infrastructures. Yet research on fair AI and social measurement often fails to critically interrogate the disciplinary assumptions and demarcations that structure its own practice. The result is a paradox. The attempt to mitigate fairness-related harms relies on a conception of applied AI that endorses the very condition that gives rise to many of those and countless other harms.

This point becomes clearer when we consider the contrasting approach. Suppose we emphatically identify the predictive AI applications discussed in these research papers as novel tools for classification and quantification firmly within disciplines such as psychology, education, medicine or political science (cf. Mussgnug 2022). In this case, practitioners would already be accountable to the particular domain-specific methodological norms, such as the practices of measurement modeling and validation that have developed in distinctive ways across social-science disciplines in order to foster socially responsible and reliable research.

Much work would still need to be done in order to better understand how measurement validation and other methodologies must be adjusted to the particularities of AI applications. At the same time, this approach eliminates the need to import hitherto foreign methodological resources from disciplines in the social sciences to the (purported) realm of applied AI. In this framing, the requirement to engage in validation or other domain-specific measurement practices follows as an internal disciplinary obligation, not an optional enhancement. This highlights how the need for importing measurement validation and modeling only ever emerges as a result of pursuing or framing applied AI as a domain external to existing social science disciplines and their methodological norms.

2.4. Friend or foe to ethical AI?

Work on AI fairness and measurement validation is only one example of a broader culture of interdisciplinarity that has gained traction at the intersection of applied AI and ethics: a mode of research that seeks to mitigate social and ethical harms by importing norms, methodologies and governance frameworks from established disciplines, such as the social sciences or medicine, into applied AI. For instance, some have turned toward fields such as medicine to identify forms of ethical and regulatory governance for emerging AI applications. AI principalism, for example, adapts an ethical framework based on high-level principles from biomedical ethics (e.g., Beauchamp and Childress 2013) to AI governance (cf. Mittelstadt 2019). Endorsing an “FDA for AI,” some have argued for adapting frameworks for medical regulation and licensing to AI systems (Carpenter and Ezell 2024). Yet others have suggested translating principles of justice from the realm of public institutions to AI applications (Westerstrand 2024).

Such interdisciplinary research approaches applied AI as its own domain, a realm of practice distinct and separate from disciplines such as medicine, finance, psychology, law, or education. This understanding of applied AI as its own autonomous domain externalizes the existing disciplinary norms, methodologies, theoretical insights, and governance practices cultivated within those fields. It casts applied AI as somewhat of a blank slate, ethically and epistemically uncharted territory, a terra nullius (Mussgnug 2025). The culture of interdisciplinary AI ethics discussed here is keenly aware of the problems caused by this normative vacuum and of the relevance of existing disciplinary resources, which it seeks to re-introduce. In doing so, however, this culture of interdisciplinarity can simultaneously sustain and contribute to the very disciplinary independence of applied AI that causes the problems it attempts to solve. It risks treating an ever-growing list of symptoms while playing into their cause.

3. Three paths forward

How should we respond to this issue? As I see it, there are three options available – which depend, in part, on the extent to which one considers this a problem that arises from a certain framing of applied AI research or one that stems from an institutional and sociological disciplinary reality. First, we might hold that the problem lies in a problematic depiction of applied AI research as a distinct discipline. Instead of presenting the kind of research discussed as an interdisciplinary translational exercise, we would reframe it as an act of holding AI applications accountable to the epistemic and ethical frameworks governing the disciplinary fields within which they operate. Second, we might maintain that the organization of applied AI research and self-understanding of practitioners is such that we must acknowledge applied AI as a distinct discipline but that the problems arising from this disciplinary autonomy are so grave as to warrant reformation of this status quo. For reasons which I outline below, I believe that both options mentioned so far converge on the same actionable insight. Lastly, we might hold that the benefits offered by the pursuit of applied AI as a distinct discipline outweigh its ethical downsides. Even in this case, however, we can still encourage greater critical engagement with the complications raised in this paper.

The goal of this section is not to settle this debate. Rather, I seek to emphasize that in exploring this solution space, work promoting ethical AI might benefit from closer engagement with scholarship in the social studies of interdisciplinarity and disciplinarity. To this end, I contextualize the three options outlined below within this scholarship. As a former AI practitioner now engaged in the normative philosophy of science and technology, I do not aspire for my depiction of the social studies of (inter)disciplinarity to be authoritative or comprehensive. I merely hope to do enough to establish the importance of these considerations for the issue this paper raises and to encourage further research at the intersection of AI ethics and the social studies of interdisciplinarity.

3.1. Option 1: Pseudo-interdisciplinarity and a different framing

Our first option is to simply deny that the application of AI in science forms part of a distinct discipline of applied AI. Ethically motivated research efforts presented as interdisciplinary are then actually a particular form of pseudo-interdisciplinarity. Heinz Heckhausen (1972, p. 87) originally characterizes pseudo-interdisciplinarity as “the erroneous proposition that sharing analytical tools such as mathematical models of computer simulation constitutes ‘intrinsic’ interdisciplinarity.” The research presented here, however, might constitute a fascinating second case of pseudo-interdisciplinarity, where a seeming interdisciplinarity emerges from the questionable framing of certain research as a distinct discipline. In other words, it presents a case where pseudo-interdisciplinarity arises from the pseudo-disciplinarity of applied AI.

Going down this route would not deny the existence of a discipline which is concerned with the study of AI. Detailed historical research and current institutional realities make the case for the emergence of “artificial intelligence” as a discipline, distinctively combining influences from fields such as cognitive science, computer science and philosophy.Footnote 8 Reframing Apostel and coauthors (1972, p. 9), AI is one of many cases where “the ‘inter-discipline’ of yesterday is the ‘discipline’ of today.” But accepting the disciplinary status of artificial intelligence would not commit us to understanding work in applied AI as part of this discipline. A heuristic can help clarify this complex picture (and, so I believe, can be instructive for research on the interdisciplinarity and ethics of AI more broadly).

The term “artificial intelligence” denotes both a task and a tool. As a task, AI is a broad project engaged in understanding and working toward (forms of) machine intelligence. It is the kind of project laid out most famously in the proposal for the Dartmouth summer workshop, in which McCarthy et al. (1955/2006, p. 2) propose work “on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” As a tool, AI denotes a resource that can be employed, for instance, in the service of constructing computational predictive models within diverse areas of application.

In applied AI, AI functions primarily as a tool. Work in this area focuses on implementing artificial intelligence solutions to solve real-world problems. Clearly, such work in applied AI is both informed by and feeds back into the broader project of constructing and understanding AI (AI as a task). Nonetheless, it is distinct from it. And we can acknowledge the discipline of artificial intelligence, while holding that concrete work in applying AI to issues such as protein structure prediction (Jumper et al. 2021), poverty estimation (Aiken et al. 2022), personalized learning (Chen 2025) or depression screening (Priya et al. 2020) is best understood as the use of a novel tool within existing disciplinary contexts rather than its own distinct disciplinary endeavor.

An analogy between AI and computers can help further elucidate this picture. Today, computer science is its own discipline with institutional structures, professional organizations, and a shared research goal. Very roughly, computer science is concerned with the study and development of computational systems. Involved in it is both the theoretical study of computation and the development of concrete hardware and software systems.Footnote 9 But what about cases where we use a computer to instantiate an agent-based simulation of an economic market or to construct a complex statistical model of a social psychological phenomenon? I suspect that it would be far-fetched to consider these as instances of computer science today.Footnote 10 Instead, I suspect that we are inclined to consider them as the use of a tool (a particular computer hardware and software) firmly within fields such as economics or social psychology. One could argue that AI models are much like computers: While there is a field of study concerned with better understanding artificial intelligence or developing novel formal AI architectures, it would be questionable to include in this field the mere application of AI tools.Footnote 11

Rejecting applied AI’s status as a distinct discipline reveals the culture of interdisciplinarity examined here as a form of pseudo-interdisciplinarity – one whose very structure may counteract the ethical objectives it aspires to achieve. Acknowledging this does not entail abandoning efforts to highlight the importance of methodologies such as measurement validation, concepts such as measurement accuracy, or existing best practices and regulatory frameworks. Instead, it puts into focus an alternative framing: rather than pursuing the transplantation of disciplinary norms and resources into an ostensibly autonomous field of applied AI, we could prioritize the responsible integration of AI tools within existing disciplinary infrastructures.Footnote 12 This approach, I suspect, might offer a promising pathway toward realizing the ethical and social aspirations of such research.

Distinct disciplines have cultivated diverse sets of methodological and conceptual resources, norms, best practices, and governance and accountability frameworks. They have developed these resources in response to the particular ethical and epistemic challenges present in their research and as lessons learned from past failures. Among countless others, social scientists have established forms of concept validation to anticipate ethically-relevant mismatches between concepts and operationalizations, psychiatry has developed methodologies such as differential diagnosis in order to minimize the risk of misdiagnosis and associated harms (Frances 2013), and medicine has adopted strict norms around privacy and informed consent in order to protect vulnerable patient communities (Belmont Report, 1979). For all justified criticism regarding disciplinary rigidity, it must be acknowledged that these distinct disciplinary features also represent a certain historically cultivated wisdom, accumulated domain experience, and attentiveness to contextual complications (cf. Nissenbaum 2009).

If AI applications were to be more firmly understood as novel elements within existing domains of practice, AI practitioners would already be accountable to their methodologies, norms, virtues, best practices and regulations. Conversely, as interdisciplinary research frames applied AI as a distinct domain, these contextual norms remain external to AI practice. As such, they require separate justification, and AI practitioners might easily interpret adherence to them as an optional effort rather than an inherent obligation – a problem already evident in limited compliance with imported methodologies and ethical frameworks (cf. Hagendorff 2020; Mittelstadt 2019; Munn 2023).

3.2. Option 2: Challenging existing disciplinary structures

It might be argued that a mere reframing misses the point. The disciplinarity of applied AI, so one might argue, stems from an institutional and sociological reality. On this reading, the organization of research and self-understanding of researchers is such that applied AI is indeed a distinct discipline. Nonetheless, I believe that we might still have reason and room to articulate a counter-imaginary to those disciplinary realities in light of the problems they pose to ethical AI. Such a proposal might seem far-fetched. However, work in the social studies of science reveals to us that emerging disciplines are often much more contingent than ordinarily assumed.

As Andrew Barry and Georgina Born (2013, p. 8) note, “disciplinary boundaries are neither entirely fixed nor fluid; rather, they are relational and in formation.” In this, Barry and Born echo influential work by Timothy Lenoir. Lenoir (1997) opposes a naïve conception of scientific knowledge as disinterested and a resulting view of scientific disciplines as forming independently and quasi-necessarily around their subject matter. Instead, Lenoir explicates how disciplines emerge within cultural, political, social and economic struggle. Disciplines are political structures. They mediate between the realm of scientific research and broader cultural, economic, and social forces.Footnote 13 As he puts it: “they are creatures of history reflecting human habits and preferences rather than a fixed order of nature” (Lenoir 1997, p. 58).

Applied AI, to the extent that it constitutes an independent discipline, is no exception. Its emergence as a discipline was equally shaped in negotiation with complex political, social and economic dynamics. A comprehensive account or even a sketch of this development goes beyond the scope of this paper (and, quite frankly, my expertise). Instead, I seek to focus on how the disciplinary independence of applied AI can tie into broader economic motives specifically through the ways in which it circumvents existing domain-specific norms.

If applied AI denotes a novel autonomous discipline, practitioners are (at least prima facie) freed from engaging with the often-extensive methodological, theoretical, ethical and regulatory norms that govern existing disciplines. And this serves well both the ideological environment and the economic motives surrounding AI applications. In 2020, industry developed 96% of the biggest AI models. As of 2020, 70% of AI PhD graduates went to industry (a percentage that has likely increased by now), and the hiring of academic faculty working on AI into industry witnessed an eightfold increase between 2006 and 2020 (Ahmed et al. 2023). Closely tied to commercial interests, AI applications are often associated with themes of “disruption” or with the motivation “to move fast and break things” in efforts to rush products to market (Cook 2020; Daub 2020; Zuckerberg 2012). Ignorance of the many resources, standards, and norms that existing disciplines have cultivated becomes instrumental to these ends.

Work in the social studies of science further stresses how disciplines are sustained through shared institutions, culture and practice. Part of this is so-called “boundary work” – practices that establish and maintain disciplinary identity and distinctiveness (Gieryn 1999). The concept of boundary work enables us to theorize what became evident in the previous section. Since the sometimes superficial and selective import of existing methodologies, ethical frameworks, or best practices can serve to signal applied AI’s (and its subfields’) disciplinary legitimacy, it constitutes boundary work.

Acknowledging both the contingent nature of discipline formation and the importance of boundary work in sustaining disciplinary identity lends credence to two insights. First, the disciplinarity of applied AI might have also emerged precisely to circumvent existing ethical and epistemic norms in service of specific economic interests. And, second, challenging a certain framing of ethically motivated scholarship on applied AI as translating epistemic and ethical frameworks from established disciplines might itself challenge the disciplinarity of applied AI.Footnote 14 Options one and two thus converge on how to respond to the problem outlined in Section 2. We are asked to reject the framing of such research as translational or interdisciplinary in light of its role in legitimizing the disciplinary distinctiveness of applied AI – whether we take this disciplinary status to be contestable or only objectionable.

3.3. Option 3: Exploring different modes of interdisciplinary research

A third option would be to accept the disciplinary status of applied AI (and its subdisciplines), holding that it offers benefits that outweigh the ethical drawbacks outlined in this paper. For instance, the organization of applied AI in a distinct discipline might allow practitioners to better leverage economic opportunities, share transferable insights across fields of AI application, or devise overarching ethical frameworks that address issues arising from the technical specificities of AI tools (e.g., opacity).

However, nothing thereby prevents us from more critical engagement with the drawbacks that such disciplinary independence brings and with how well-intended mitigation measures can paradoxically reinforce these issues. Even when granting applied AI disciplinary status, we might still explore alternative, more ethically beneficial, ways of pursuing interdisciplinary research. To this end, work in the social studies of science can again serve as an important resource for both characterizing the precise nature of the culture of interdisciplinary research problematized here and offering visions for alternative interdisciplinary engagement.

The existing taxonomy around the genus of interdisciplinarity is both rich and complex. At the most general level, one can distinguish between multi-, inter- and transdisciplinarity. Multidisciplinarity merely involves the juxtaposition or cooperation of disciplines, which continue to operate firmly within established disciplinary framings (Barry and Born 2013, p. 9). In contrast, interdisciplinarity involves an effort to proactively integrate or synthesize perspectives and methodologies from distinct disciplines (Barry and Born 2013, p. 8). Characterizations of transdisciplinarity are somewhat more contested but generally denote a more radical attempt to transcend disciplinary categorizations, as well as strict demarcations between scientific experts and non-academic stakeholders (Pathways to Sustainability, n.d.).

The culture of ethically motivated research at the intersection of applied AI and established disciplines explored in this paper presents itself as pursuing the proactive integration of methodological, moral, and conceptual resources from one set of disciplines (psychology, education, political science, etc.) into applied AI. As such, it presents a case of interdisciplinary research. Interdisciplinarity (ID) can be further distinguished along multiple dimensions. For instance, one can differentiate between methodological ID and theoretical ID (Bruun et al. 2005). As Klein (2010, p. 19) notes: “The typical motivation in Methodological ID is to improve the quality of results. The typical activity is borrowing a method or concept from another discipline […].” In contrast, theoretical ID “connotes a more comprehensive general view and epistemological form. The outcomes include conceptual frameworks for analysis of particular problems, integration of propositions across disciplines, and new syntheses based on continuities between models and analogies” (Klein 2010, p. 20). Whereas methodological ID is exclusively instrumental and often somewhat opportunistic, theoretical ID can be critical, involving the interrogation and transformation of existing disciplinary knowledge structures (Klein 2010, p. 23). Within these categories, we can identify the culture of interdisciplinarity problematized in this paper as an instance of instrumental and somewhat opportunistic methodological ID.

Another distinction is made by Andrew Barry and Georgina Born (2008), who classify ID along integrative-synthesis (the integration and synthesis of disciplinary components), subordination-service (where one discipline is subordinated or put into service of another), and agonistic–antagonistic modes (emerging from an oppositional relationship to existing disciplinary knowledge practices). Barry and Born also characterize three distinct logics that can motivate interdisciplinary research: a logic of accountability responding to societal demands, a logic of innovation responding to economic opportunities, and an ontological logic which is motivated by a more fundamental reconceptualization of the objects of research.

When it comes to research at the intersection of applied AI and established disciplines, the established disciplines are subordinated in a service role. Within what Barry and Born label a logic of accountability, disciplinary resources are imported to mitigate ethical harms caused by AI applications and to respond to social and political demands for responsible AI practice. As shown, however, this culture of interdisciplinary research can sometimes result in only superficial adoption and the reframing of complex epistemic and ethical conventions into oversimplified technical engineering problems. In doing so, imported resources are stripped of their context-specificity, their theoretical grounding, and their embeddedness in broader epistemic and ethical practice. This represents what critical voices have identified as a limited additive approach characterized by only superficial engagement of machine learning and AI research with existing disciplines, particularly in the social sciences and humanities. Absent engagement with broader supporting factors and with their often nuanced application, imported resources can easily fail to have the ethically-beneficial impact scholars are aiming for.

Here scholarship in the social studies of science on the different kinds of interdisciplinarity can provide us with concrete case studies and theoretical analyses of different forms of interdisciplinary research. Even when we insist on the net-benefit of applied AI’s disciplinary independence, these resources might help identify and pursue modes of interdisciplinary research that better serve the intended goal of more ethical practice and mitigate some of the drawbacks highlighted in this paper.

4. Conclusion

This paper sought to problematize a certain culture of interdisciplinarity at the intersection of applied AI and ethics.Footnote 15 While this line of research rightly emphasizes the importance of existing methodologies and norms, I have called for more critical engagement with how this particular culture of interdisciplinarity can legitimize the notion of applied AI as a distinct field of research – absent any acknowledgment of how the disciplinary independence of applied AI can give rise to some of the very ethical problems it seeks to address. I take raising this problem to be the primary contribution of my paper.

In response, I have outlined three paths forward. My examination hopes to stimulate further meta-level reflection on how assumptions and realities of applied AI’s disciplinarity shape the pursuit of interdisciplinarity in ethical AI. First, we might hold that the problem lies in a problematic framing of applied AI research as a distinct discipline. Consequently, we would call for the kind of research discussed to be appropriately presented not as interdisciplinary but as an act of holding AI applications accountable to the epistemic and ethical frameworks governing the disciplinary fields within which they operate. Second, we might hold that the organization of applied AI research and self-understanding of practitioners is such that we must acknowledge applied AI as a distinct discipline but that the problems arising from this disciplinary autonomy are so grave as to warrant reformation of this status quo. Lastly, we might hold that the distinct benefits offered by the pursuit of applied AI as a distinct discipline outweigh its ethical downsides, while still encouraging greater critical engagement with the complications raised in this paper.

I did not argue for one of these options. Instead, my brief and high-level discussion sought to underscore how research on interdisciplinarity in the social studies of science can serve as an important resource for navigating any of these paths forward. In doing so, I hope to stimulate more substantial collaborations in exploring how different cultures of interdisciplinarity relate to the pursuit of more ethical AI.

Acknowledgements

I would like to thank Elisa Cardamone and Meghan M. Shea for their feedback on earlier versions of this article. I also thank two anonymous reviewers for their exceptionally productive comments.

Funding statement

Work on this paper was supported by a Baillie Gifford Studentship in AI ethics at the Centre for Technomoral Futures, University of Edinburgh and an Interdisciplinary Ethics Fellowship at Stanford’s McCoy Family Center for Ethics in Society and Apple University.

Competing Interests

The author declares no competing interests.

Footnotes

1 My claim is not that epistemic dimensions are not central to this project nor that more ethical AI often equals epistemically better AI, but rather that the primary motivation for such interdisciplinary research is often moral. This might involve addressing epistemic challenges causing ethical harm by importing methodological standards (see Section 2.1), as well as addressing fundamentally moral shortcomings by importing normative or regulatory frameworks (see Section 2.4).

2 My use of the term AI intends to track its popular, often generous, use in scientific research and beyond. Most centrally this involves machine learning applications but can also refer to complex applications of more traditional computational methods and other technologies. I acknowledge that this might leave the concept somewhat underspecified. Since the purpose of my paper is also to engage in an ongoing debate, I believe it is best to align with the vocabulary of this debate rather than imposing clear conceptual boundaries on a historically and currently rather fuzzy category.

4 While the term concept is more popular in disciplines such as philosophy, construct is more commonly used in, among others, psychology. Despite (contested) differences in their meaning, I use these terms interchangeably.

5 Relatedly, scholars have explored issues surrounding equity and justice in light of how the concept “good behavior” is operationalized in China’s social credit system (Engelmann et al. 2019).

6 FAccT (the ACM Conference on Fairness, Accountability, and Transparency) is the major conference on fairness and computational systems. In their paper, Jacobs and Wallach propose to use measurement modeling and construct validation both as a methodological resource for AI practitioners and as a way to better understand debates within AI fairness itself. Here, I focus only on the former.

7 Importantly, such mismatches do not arise only because assumptions are “wrong” in any simplistic manner but often also because the relevant theoretical constructs are highly contextual, with different aspects mattering for different decision-making scenarios.

8 A detailed account is unfortunately beyond the scope of this paper, but can be found in works such as Galanos (2023) and Wooldridge (2021).

9 For more substantial readings on the history and emergence of computer science as a discipline, see Ensmenger (2012), Nofre (2023), or Tedre (2014).

10 I believe the comparison of applied AI with computer science to be particularly instructive since both are general-purpose technologies. As a result, the dynamics likely differ from those of other, more sector-specific instrumentation (Lenoir 1997, Chapter 9).

11 Note that I use AI tools to refer to generic AI models and learning architectures and, thus, do not consider the mere training on particular datasets as the development of fundamentally new AI tools. In other words, my analogy relies on the claim that developing novel forms of learning architectures is akin to the development of traditional computer software, whereas the training of existing ML models on new data is much like the use of existing computer software.

12 A parallel argument can be made that applied AI might be fundamentally interdisciplinary to the extent that this would involve a meaningful degree of accountability to established disciplinary norms and methodologies.

13 Lenoir’s work resonates with a broad range of scholarship that illustrates how the emergence of disciplines and the construction of social and symbolic disciplinary boundaries is shaped by a diverse set of factors: from the existence of shared problems, understandings, and methods to social, political or financial interests, affordances and imaginaries (Gieryn 1983; Jasanoff 2013; Meier and Dudley 1984; Sapp 2003; Tedre 2014).

14 Challenging the disciplinarity of AI must further involve pursuing alternatives to existing disciplinary structures and practices through institutional, pedagogical and social interventions. This might include incentives for the publication of applied AI work within existing disciplinary journals, greater technical training of domain experts, or changes in departmental politics.

15 While my focus lay on a particular culture of interdisciplinarity, my arguments also bear relevance for emerging work on AI as a means to or a locus wherein to transcend existing disciplinary divisions and structures (e.g., Cao 2023).

References

Ahmed, Nur, Wahed, Muntasir, and Thompson, Neil C.. 2023. “The Growing Influence of Industry in AI Research.” Science 379 (6635): 884886. https://doi.org/10.1126/science.ade2420.CrossRefGoogle ScholarPubMed
Aiken, Emily, Bellue, Suzanne, Karlan, Dean, Udry, Chris, and Blumenstock, Joshua E.. 2022. “Machine Learning and Phone Data Can Improve Targeting of Humanitarian Aid.” Nature 603 (7903): Article7903. https://doi.org/10.1038/s41586-022-04484-9.CrossRefGoogle ScholarPubMed
Alexandrova, Anna, and Haybron, Daniel M.. 2016. “Is Construct Validation Valid?Philosophy of Science 83(5): Article 5. https://doi.org/10.1086/687941.CrossRefGoogle Scholar
Apostel, Leo, Berger, Guy, Briggs, Asa, and Michaud, Guy. 1972. Interdisciplinarity; Problems of Teaching and Research in Universities. Paris, France: Organisation for Economic Co-operation and Development.Google Scholar
Baker, Ryan S. and Hawn, Aaron. 2022. “Algorithmic Bias in Education.” International Journal of Artificial Intelligence in Education 32 (4): 10521092. https://doi.org/10.1007/s40593-021-00285-9.CrossRefGoogle Scholar
Barry, Andrew, and Born, Georgina. 2013. “Reconfigurations of the Social and Natural Sciences.” In Interdisciplinarity: Reconfigurations of the Social and Natural Sciences, edited by Barry, : Andrew and Born, Georgina, 1, London: Routledge.CrossRefGoogle Scholar
Barry, Andrew, Born, Georgina, and Weszkalnys, Gisa. 2008. “Logics of Interdisciplinarity.” Economy and Society 37 (1): 2049. https://doi.org/10.1080/03085140701760841.CrossRefGoogle Scholar
Beauchamp, Tom L., and Childress, James F.. 2013. Principles of Biomedical Ethics. 7th ed. New York : Oxford University.Google Scholar
Bruun, Hans, Hukkinen, Janne I., Huutoniemi, Katariina I., and Klein, Julie Thompson. 2005. Promoting Interdisciplinary Research: The Case of the Academy of Finland. Helsinki, Finland: Academy of Finland.Google Scholar
Cao, Longbing 2023. “Trans-AI/DS: Transformative, Transdisciplinary and Translational Artificial Intelligence and Data Science.” International Journal of Data Science and Analytics 15 (2): 119132. https://doi.org/10.1007/s41060-023-00384-x.CrossRefGoogle Scholar
Carpenter, Daniel, and Ezell, Carson. 2024. ‘An FDA for AI? Pitfalls and Plausibility of Approval Regulation for Frontier Artificial Intelligence’. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October): 239–54. https://doi.org/10.1609/aies.v7i1.31633.CrossRefGoogle Scholar
Chen, Li (2025). Research on AI-driven Personalized Learning Path Planning and Effectiveness under Dual-system Teaching Mode. Proceedings of the 2nd Guangdong-Hong Kong-Macao Greater Bay Area Education Digitalization and Computer Science International Conference, 16. https://doi.org/10.1145/3746469.3746471CrossRefGoogle Scholar
Cook, Katy 2020. The Psychology of Silicon Valley: Ethical Threats and Emotional Unintelligence in the Tech Industry. Cham, Switzerland: Palgrave Macmillan. https://doi.org/10.1007/978-3-030-27364-4.CrossRefGoogle Scholar
Daub, Adrian 2020. What Tech Calls Thinking: An Inquiry into the Intellectual Bedrock of Silicon Valley. NYC, USA: Farrar, Straus and Giroux.Google Scholar
Engelmann, Stefan, Chen, Mengchen, Fischer, Florian, Kao, Ching-yu, and Grossklags, Jens 2019. “Clear Sanctions, Vague Rewards: How China’s Social Credit System Currently Defines ‘Good’ and ‘Bad’ Behavior”. Proceedings of the Conference on Fairness, Accountability, and Transparency, 6978. https://doi.org/10.1145/3287560.3287585CrossRefGoogle Scholar
Ensmenger, Nathan L. 2012. The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise. Cambridge, MA: MIT Press.Google Scholar
Fazelpour, Siavash, and Danks, David. 2021. “Algorithmic Bias: Senses, Sources, Solutions.” Philosophy Compass 16 (8): e12760. https://doi.org/10.1111/phc3.12760.CrossRefGoogle Scholar
Ferguson, Rebecca, and Clow, Doug (2017). “Where Is the Evidence? A Call to Action for Learning Analytics”. Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 5665. https://doi.org/10.1145/3027385.3027396CrossRefGoogle Scholar
Floridi, Luciano 2023. The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford, UK: Oxford University Press.CrossRefGoogle Scholar
Frances, Allen 2013. Essentials of Psychiatric Diagnosis, Revised Edition: Responding to the Challenge of DSM-5 ? NYC, USA: Guilford Publications.Google Scholar
Galanos, Vassilis 2023. “Expectations and Expertise in Artificial Intelligence: Specialist Views and Historical Perspectives on Conceptualisation, Promise, and Funding.” https://doi.org/10.7488/era/3188.CrossRefGoogle Scholar
Gardner, Josh, Brooks, Christopher, and Baker, Ryan S. 2019. “Evaluating the Fairness of Predictive Student Models through Slicing Analysis.” Proceedings of the 9th International Conference on Learning Analytics & Knowledge, 225–234. https://doi.org/10.1145/3303772.3303791.
Gašević, Dragan, Dawson, Shane, and Siemens, George. 2015. “Let’s Not Forget: Learning Analytics Are about Learning.” TechTrends 59 (1): 64–71. https://doi.org/10.1007/s11528-014-0822-x.
Gieryn, Thomas F. 1983. “Boundary-Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists.” American Sociological Review 48 (6): 781–795. https://doi.org/10.2307/2095325.
Gieryn, Thomas F. 1999. Cultural Boundaries of Science: Credibility on the Line. Chicago, IL, USA: University of Chicago Press.
Guerdan, Lindsay, Holstein, Kenneth, Steven, Zachary, and Wu, Shuran. 2022. “Under-reliance or Misalignment? How Proxy Outcomes Limit Measurement of Appropriate Reliance in AI-assisted Decision-making.” https://www.semanticscholar.org/paper/Under-reliance-or-misalignment-How-proxy-outcomes-Guerdan-Holstein/317a02a572a450af28352869ffae3d8df0104abd.
Hagendorff, Thilo. 2020. “The Ethics of AI Ethics: An Evaluation of Guidelines.” Minds and Machines 30 (1): 99–120. https://doi.org/10.1007/s11023-020-09517-8.
Heckhausen, Heinz. 1972. “Discipline and Interdisciplinarity.” In Interdisciplinarity: Problems of Teaching and Research in Universities, 80–89. Washington, DC: OECD Publications Center. https://eric.ed.gov/?id=ED061895.
Herrmann, Michael, Lange, Florian J. D., Eggensperger, Katharina, Casalicchio, Giuseppe, Wever, Matthias, Feurer, Matthias, Rügamer, Daniel, Hüllermeier, Eyke, Boulesteix, Anne-Laure, and Bischl, Bernd. 2024a. “Position: Why We Must Rethink Empirical Research in Machine Learning.” arXiv:2405.02200. https://doi.org/10.48550/arXiv.2405.02200.
Herrmann, Michael, Lange, Florian J. D., Eggensperger, Katharina, Casalicchio, Giuseppe, Wever, Matthias, Feurer, Matthias, Rügamer, Daniel, Hüllermeier, Eyke, Boulesteix, Anne-Laure, and Bischl, Bernd. 2024b. “Position: Why We Must Rethink Empirical Research in Machine Learning.” arXiv:2405.02200. https://doi.org/10.48550/arXiv.2405.02200.
Hutson, Matthew. 2018. “Artificial Intelligence Faces Reproducibility Crisis.” Science 359 (6377): 725–726. https://doi.org/10.1126/science.359.6377.725.
Jacobs, Abigail Z., and Wallach, Hanna. 2021. “Measurement and Fairness.” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 375–385. https://doi.org/10.1145/3442188.3445901.
Jasanoff, Sheila. 2013. “Fields and Fallows: A Political History of STS.” In Interdisciplinarity: Reconfigurations of the Social and Natural Sciences, edited by Barry, Andrew and Born, Georgina. London: Routledge.
Jumper, John, Evans, Richard, Pritzel, Alexander, Green, Tim, Figurnov, Michael, Ronneberger, Olaf, Tunyasuvunakool, Kathryn, Bates, Russ, Žídek, Augustin, Potapenko, Alex, Bridgland, Andrew, Meyer, Charlie, Kohl, Simon A. A., Ballard, Andrew J., Cowie, Adrian, Romera-Paredes, Bernardo, Nikolov, Sergey, Jain, Rishabh, Adler, Jon, et al. 2021. “Highly Accurate Protein Structure Prediction with AlphaFold.” Nature 596 (7873): 583–589. https://doi.org/10.1038/s41586-021-03819-2.
Klein, Julie Thompson. 2010. “A Taxonomy of Interdisciplinarity.” In The Oxford Handbook of Interdisciplinarity, edited by Frodeman, Robert, Klein, Julie Thompson, and Mitcham, Carl. Oxford, UK: Oxford University Press.
Kleinberg, Jon, Lakkaraju, Himabindu, Leskovec, Jure, Ludwig, Jens, and Mullainathan, Sendhil. 2018. “Human Decisions and Machine Predictions.” The Quarterly Journal of Economics 133 (1): 237–293. https://doi.org/10.1093/qje/qjx032.
Knobbout, Jeroen, and Van Der Stappen, Eric. 2020. “Where Is the Learning in Learning Analytics? A Systematic Literature Review on the Operationalization of Learning-Related Constructs in the Evaluation of Learning Analytics Interventions.” IEEE Transactions on Learning Technologies 13 (3): 631–645. https://doi.org/10.1109/TLT.2020.2999970.
Knox, Jeremy, Williamson, Ben, and Bayne, Sian. 2020. “Machine Behaviourism: Future Visions of ‘Learnification’ and ‘Datafication’ across Humans and Digital Technologies.” Learning, Media and Technology 45 (1): 31–45. https://doi.org/10.1080/17439884.2019.1623251.
Lenoir, Timothy. 1997. Instituting Science: The Cultural Production of Scientific Disciplines. Stanford, CA, USA: Stanford University Press.
Luan, Jian, and Zhao, Cheng-Min. 2006. “Practicing Data Mining for Enrollment Management and Beyond.” New Directions for Institutional Research 2006 (131): 117–122. https://doi.org/10.1002/ir.191.
Malik, Muhammad Muneeb. 2020. “A Hierarchy of Limitations in Machine Learning.” arXiv:2002.05193. https://doi.org/10.48550/arXiv.2002.05193.
McCarthy, John, Minsky, Marvin L., Rochester, Nathaniel, and Shannon, Claude E. 2006. “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence: August 31, 1955.” AI Magazine 27 (4): 12–14. Original work published 1955.
Meier, Gerald M., and Dudley, Sélim H. 1984. Pioneers in Development (No. 9948). World Bank Group. https://documents.worldbank.org/en/publication/documents-reports/documentdetail/389011468137378972/Pioneers-in-development.
Messeri, Lisa, and Crockett, Molly J. 2024. “Artificial Intelligence and Illusions of Understanding in Scientific Research.” Nature 627 (8002): 49–58. https://doi.org/10.1038/s41586-024-07146-0.
Mittelstadt, Brent. 2019. “Principles Alone Cannot Guarantee Ethical AI.” Nature Machine Intelligence 1 (11): 501–507. https://doi.org/10.1038/s42256-019-0114-4.
Munn, Luke. 2023. “The Uselessness of AI Ethics.” AI and Ethics 3 (3): 869–877. https://doi.org/10.1007/s43681-022-00209-w.
Mussgnug, Alexander Martin. 2022. “The Predictive Reframing of Machine Learning Applications: Good Predictions and Bad Measurements.” European Journal for Philosophy of Science 12 (3): 55. https://doi.org/10.1007/s13194-022-00484-8.
Mussgnug, Alexander Martin. 2025. “Technology as Uncharted Territory: Integrative AI Ethics as a Response to the Notion of AI as New Moral Ground.” Philosophy & Technology 38 (3): 106. https://doi.org/10.1007/s13347-025-00938-w.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. 1979. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html.
Nissenbaum, Helen. 2009. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford, CA, USA: Stanford University Press.
Nofre, David. 2023. “‘Content Is Meaningless, and Structure Is All-Important’: Defining the Nature of Computer Science in the Age of High Modernism, c. 1950–c. 1965.” IEEE Annals of the History of Computing 45 (2): 29–42. https://doi.org/10.1109/MAHC.2023.3266359.
O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London, UK: Penguin UK.
Pathways to Sustainability. n.d. Transdisciplinary Field Guide. Retrieved 25 April 2025, from https://www.uu.nl/en/research/transdisciplinary-field-guide/get-started/what-is-transdisciplinary-research.
Priya, Anjali, Garg, Anjali, and Tigga, Nirmal Prabha. 2020. “Predicting Anxiety, Depression and Stress in Modern Life Using Machine Learning Algorithms.” Procedia Computer Science 167: 1258–1267. https://doi.org/10.1016/j.procs.2020.03.442.
Radford, Jason, and Joseph, Kenneth. 2020. “Theory In, Theory Out: The Uses of Social Theory in Machine Learning for Social Science.” arXiv:2001.03203. https://doi.org/10.48550/arXiv.2001.03203.
Rienties, Bart, Køhler Simonsen, Henrik, and Herodotou, Christothea. 2020. “Defining the Boundaries between Artificial Intelligence in Education, Computer-Supported Collaborative Learning, Educational Data Mining, and Learning Analytics: A Need for Coherence.” Frontiers in Education 5. https://doi.org/10.3389/feduc.2020.00128.
Sapp, Jan. 2003. Genesis: The Evolution of Biology. Oxford, UK: Oxford University Press.
Selbst, Andrew D., Boyd, Danah, Friedler, Sorelle A., Venkatasubramanian, Suresh, and Vertesi, Janet. 2019. “Fairness and Abstraction in Sociotechnical Systems.” Proceedings of the Conference on Fairness, Accountability, and Transparency, 59–68. https://doi.org/10.1145/3287560.3287598.
Selwyn, Neil. 2019. “What’s the Problem with Learning Analytics?” Journal of Learning Analytics 6 (3): 11–19. https://doi.org/10.18608/jla.2019.63.3.
Semmelrock, Hannah, Ross-Hellauer, Tony, Kopeinik, Sabine, Theiler, Daniel, Haberl, Andreas, Thalmann, Stefanie, and Kowald, David. 2025. “Reproducibility in Machine-Learning-Based Research: Overview, Barriers, and Drivers.” AI Magazine 46 (2): e70002. https://doi.org/10.1002/aaai.70002.
Siemens, George. 2013. “Learning Analytics: The Emergence of a Discipline.” American Behavioral Scientist 57 (10): 1380–1400. https://doi.org/10.1177/0002764213498851.
Tal, Eran. 2023. “Target Specification Bias, Counterfactual Prediction, and Algorithmic Fairness in Healthcare.” Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’23), 312–321. https://doi.org/10.1145/3600211.3604678.
Tedre, Matti. 2014. The Science of Computing: Shaping a Discipline. New York: CRC Press.
Westerstrand, Sofia. 2024. “Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence.” Science and Engineering Ethics 30 (5): 46. https://doi.org/10.1007/s11948-024-00507-y.
Winne, Philip H. 2020. “Construct and Consequential Validity for Learning Analytics Based on Trace Data.” Computers in Human Behavior 112: 106457. https://doi.org/10.1016/j.chb.2020.106457.
Wooldridge, Michael. 2021. A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going. New York: Flatiron Books.
Zuckerberg, Mark. 2012. “Zuckerberg Letter to Shareholders in Advance of IPO.” Securities and Exchange Commission. https://epublications.marquette.edu/zuckerberg_files_transcripts/48.