
Handling the hype: Demystifying artificial intelligence for memory studies

Published online by Cambridge University Press:  27 October 2025

Samuel Merrill*
Affiliation:
Department of Sociology and Centre of Digital Social Research, Umeå University, Umeå, Sweden
Mykola Makhortykh
Affiliation:
Institute of Communication and Media Studies, University of Bern, Bern, Switzerland
Silvana Mandolessi
Affiliation:
Department of Literary Theory and Cultural Studies, KU Leuven, Leuven, Belgium
Victoria Grace Richardson-Walden
Affiliation:
Landecker Digital Memory Lab, University of Sussex, Brighton, UK
Rik Smit
Affiliation:
Research Centre for Media and Journalism Studies, University of Groningen, Groningen, The Netherlands
Qi Wang
Affiliation:
Department of Psychology and Culture and Cognition Lab, College of Human Ecology, Cornell University, Ithaca, NY, USA
*
Corresponding author: Samuel Merrill; Email: samuel.merrill@umu.se

Abstract

Artificial Intelligence (AI) has reached memory studies in earnest. This partly reflects the hype around recent developments in generative AI (genAI), machine learning, and large language models (LLMs). But how can memory studies scholars handle this hype? Focusing on genAI applications, in particular so-called ‘chatbots’ (transformer-based instruction-tuned text generators), this commentary highlights five areas of critique that can help memory scholars to critically interrogate AI’s implications for their field. These are: (1) historical critiques that complicate AI’s common historical narrative and historicize genAI; (2) technical critiques that highlight how genAI applications are designed and function; (3) praxis critiques that centre on how people use genAI; (4) geopolitical critiques that recognize how international power dynamics shape the uneven global distribution of genAI and its consequences; and (5) environmental critiques that foreground genAI’s ecological impact. For each area, we highlight debates and themes that we argue should be central to the ongoing study of genAI and memory. We do this from an interdisciplinary perspective that combines our knowledge of digital sociology, media studies, literary and cultural studies, cognitive psychology, and communication and computer science. We conclude with a methodological provocation and by reflecting on our own role in the hype we are seeking to dispel.

Information

Type
Commentary
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Introduction

Artificial intelligence (AI) has reached memory studies in earnest and, while not without precedent (Locke 2000), academic attention to its mnemonic consequences is growing (Gensburger and Clavert 2024; Hoskins et al. 2024). This partly reflects the hype surrounding recent technological developments, especially in generative AI (genAI), machine learning, and large language models (LLMs). Markelius et al. (2024) discuss four characteristics of this hype: (1) the strategic anthropomorphization of AI systems leading to false perceptions; (2) the proliferation of techno-determinist experts who stress AI’s inevitability; (3) uneven influence over AI narratives; and (4) the insouciant overuse of the ‘AI’ term. While the current AI hype may in some respects already be dissipating and AI’s amplified significance already becoming normalized, the effects of these processes are likely to be long-lasting (Floridi 2024; Widder and Hicks 2024).

AI systems: machine-based systems designed to function with differing levels of autonomy, which may show signs of adaptiveness after deployment, and that, for explicit or implicit goals, infer, from the input they receive, how to generate outputs including predictions, content, recommendations, or decisions that may influence their virtual or physical environments. (EU 2024)

Generative AI (GenAI): AI applications that use different types of machine learning models (including LLMs or generative adversarial networks (GANs)) to synthesize textual, image, and audio content, often (but not necessarily) in response to user prompts.

Machine learning: the subfield of computer science that uses algorithms and statistical models to analyse and draw inferences from data and, in the case of deep learning, to develop AI systems that can learn and adapt without instruction.

Large language models (LLMs): probabilistic machine learning models with many parameters (typically more than a billion) designed to interpret and synthesize responses to human language.

The hype that casts AI as desirable, inevitable, and revolutionary is tied to the efforts of big tech companies to use AI to further monetize the large amounts of data and computational resources they have recently consolidated (Whittaker 2021). It is also linked, on the demand side, to the societal crises that make technological solutions enticing (Broussard 2023). AI has always been partly about marketing. As computer scientist Jaron Lanier admitted: ‘AI is a story we computer scientists made up to help us get funding’ (2018, 135). Memory scholars are not immune to such impulses. The question guiding this commentary, then, is: how can memory studies handle the AI hype so as to ensure we produce nuanced critiques of its mnemonic consequences, whilst challenging the ‘common sense’ views about ‘AI’ that are sold to us?

Exploring this question, we connect with critical AI studies (see Verdegem 2021; Lindgren 2023, 2024) to understand AI’s implications for memory studies and vice versa. Primarily focusing on one form of genAI, namely, transformer-based instruction-tuned text generators (commonly known as ‘chatbots’, such as ChatGPT), we highlight five overlapping areas of critique that can help memory scholars to critically approach these AI systems as they become more mnemonically prevalent. These are: (1) historical critiques that complicate the common historical narrative behind the AI hype and historicize genAI; (2) technical critiques that emphasize how the different components of genAI systems are designed and work; (3) praxis critiques that centre on how people use genAI; (4) geopolitical critiques that recognize how geography and international power dynamics shape the uneven global distribution of genAI and its consequences; and (5) environmental critiques that stress genAI’s ecological impact. For each of these areas, we recount key debates from outside memory studies and consider their implications for our field by highlighting questions and themes that we argue should be central to the ongoing study of genAI within memory studies. Overall, we suggest that these five areas of critique can serve as complementary lenses to inform thorough, interdisciplinary analyses of the relationships between (gen)AI and memory.

In doing this, we note that the AI hype has been explicitly problematized across an array of disciplines, including information science, medical science, computer science, and media studies (see Slota et al. 2020; Van Assen et al. 2020; Vrabič Dežman 2024; Markelius et al. 2024), predominantly through the adoption of perspectives rooted in critical theory (see Verdegem 2021; Lindgren 2023, 2024). However, memory studies as an interdisciplinary endeavour has yet to explicitly address this matter or draw together the productive critiques being separately pursued by some of its contributory disciplines. In this respect, we also acknowledge the overlap between memory studies and other interdisciplinary fields like heritage studies, but for the purposes of this commentary, we distinguish between them and limit our consideration to the former. While the heritage industry writ large has adopted genAI in a mostly celebratory manner, there is a growing thread of critical research within heritage studies that is both compatible with and helps contextualize the approach we suggest in this commentary (see Foka et al. 2023; Foka and Griffin 2024).

Furthermore, in this collaborative commentary, we have intentionally limited the scope of our efforts to a conceptual overview that seeks to encourage rather than provide empirical exploration. So, while we use the commentary to indicate existing primary studies and potential lines of further inquiry, we do not seek to outline these in detail. Instead, we aim to move towards a shared critical research agenda that serves as an invitation to all to contribute empirically in the future. Besides this, we seek to share approaches and research insights from outside of memory studies that, we think, can be helpful to the field. In this respect, our commentary pertains to diverse forms of remembrance – cognitive, collected, collective, and connective – and is pitched primarily to wider memory studies communities, including those only starting to engage with AI as a topic of research.

Throughout, we try to avoid abstracting AI. We use ‘chatbots’ – LLM-supported transformer-based instruction-tuned text generators – as shorthand for genAI but note that all AI systems sit within wider social, political, cultural, and environmental assemblages and involve the fluctuating distribution of mnemonic agency between humans and non-humans (Lagerkvist and Reimer 2023; Mandolessi 2023; Merrill 2023; Lindgren 2024; Makhortykh 2024; Smit et al. 2024). To this end, we provide inset definitions throughout the commentary. We conclude with a methodological provocation and by reflecting on our own complicity in the hype we are seeking to dispel.

Historical critiques

When historicizing current AI developments, it is common to refer to earlier phases of AI growth and stasis as AI ‘summers’ and ‘winters’ (Haigh 2023; Markelius et al. 2024). However, this reinforces the narrative, promoted by big tech, of (interrupted) technological progress that will inevitably lead to so-called ‘general’ or ‘strong’ AI. Historical critiques complicate this seasonally inflected narrative by exploring genAI as the outcome of interconnected technological, social, cultural, economic, and political processes (as indicated in later sections of this commentary) and by emphasizing the views of those who narrate AI’s history differently.

General/strong versus narrow/weak AI: ‘general’ or ‘strong’ AI development pursues human-like consciousness and cognitive abilities while ‘narrow’ or ‘weak’ AI systems are restricted to specific tasks without the prerequisite of complex semantic capabilities.

Common histories of AI often begin with the decontextualization of Alan Turing’s oft-quoted question, ‘can machines think?’, shaping present (mis)understandings of genAI (Turing 1950; Proudfoot 2011). Turing was interested in imitation and in distinguishing between ‘discrete-state’ (computers) and ‘continuous’ (humans) machines. The Turing test that gives ‘computer scientists a sense of direction’ (Stilgoe 2023) is thus often misremembered as seeking to develop machines that think humanly rather than machines that mimic human thinking, foregrounding the deceitfulness of genAI (see Natale 2021).

At the 1956 Dartmouth Summer Research Project on AI, the pursuit of general/strong AI and the idea that all human intelligence could be ‘so precisely described that a machine can be made to simulate it’ took further hold (McCarthy et al. 2006, 12). Some of its participants were later critical of this view (Minsky 1986), and leading voices in the field like Joseph Weizenbaum – creator of ELIZA, commonly considered the first chatbot (developed in 1964–67) – criticized early AI boosterism as conservatively promoting technical solutions that left existing power hierarchies intact (Weizenbaum 1976; Birhane et al. 2022). More recently, the ‘mathe-morphized’ historical narrative of AI that equates the ‘precise description’ of intelligence with ‘mathematical description’ and prioritizes the pursuit of general/strong AI has been further problematized by research on AI, race, and indigenous knowledge systems that stresses the existence of multiple intelligences and subjecthoods (Buolamwini 2023; Lewis et al. 2025; Richardson-Walden and Makhortykh 2024).

In complicating notions of a ‘generalised’ intelligence that can be modelled, such research offers alternative perspectives which may better serve and represent the diversity of human, social, and cultural memory, and their possible interfaces with computer systems. These critiques sometimes still imply that LLMs are comprehensive and have autonomous agency (Richardson-Walden and Makhortykh 2024), but they also remind us that the rationalist, mathematical way of ordering things that has dominated Western thought since the 18th century (see Foucault 1994) is not the only way. As Lewis et al. (2025) argue, indigenous knowledge systems foreground intelligence as a collectively established relationality rather than a property of isolated individuals, making visible the social and cultural matrices in which notions of ‘intelligence’ emerge and acquire value.

GenAI might then be approached as narrow/weak AI that gains value by posing as general/strong AI (via anthropomorphization), even as its ongoing advancement and integration with other forms of AI complicate this. As part of the future projection of the AI narrative of progress, genAI is frequently credited with excessive levels of agency, reversing Latour’s observation that humans typically attribute limited agency to machines (Latour 1987; see Smits and Wevers 2022). Still, many earlier technologies have had profound effects on memory, and it is thus important that genAI’s mnemonic consequences are sufficiently historicized by interrogating their similarities to, and differences from, older examples of technology-assisted remembrance. While memory scholars have historically conceived technology in a human-centric and instrumentalized manner, opening up to wider histories that challenge anthropocentric and anthropomorphized historical accounts that abstract genAI can arguably help us better understand how it contributes to social processes of remembering within which agency is always distributed between human and non-human actors (Merrill 2023; Smit et al. 2024).

In short, memory scholars are well-positioned to problematize hype-driven narratives of general/strong AI’s inevitability. They can ask what mnemonic shadows are cast by AI’s dominant historical narratives, how these narratives emerged and proliferated, and what can be gained by re-reading them against the grain. Here, memory studies scholars can learn from those working in critical AI studies and media philosophy who contextualize genAI technologies within longer media debates (Natale 2021; Lindgren 2024) and encourage us to consider to what extent the ontologies, epistemologies, and aesthetics underpinning AI are actually radically new (Fazi 2019, 2024). They can also draw on interdisciplinary perspectives by, for instance, combining science and technology studies, literary studies, and media studies to reveal how genAI’s accepted historical narratives are linked to the exercise of social and political power in the present (Cave et al. 2024; Magalhães and Smit 2025). Alternatively, the analysis of oral histories, memoirs, and autobiographies could provide insight into the memory of AI’s development via the experiences of those directly involved in it. Media archaeological and historical computational science approaches could also be added to the interdisciplinary mix to re-read AI development via, for example, the resurrection and reinterpretation of old and now obsolete computer code (see Kilgrove 2025). Ultimately, problematizing the AI hype to understand the relationship between memory and genAI from a historical perspective requires us to be technologically informed without being technologically deterministic. This leads us to the next area of critique.

Technical critiques

To understand genAI’s implications for memory, it helps to know how it functions. This area of critique problematizes singular, diffuse notions of genAI by stressing the importance of ‘deblackboxing’ (Dixon et al. 2022): the effort to make the processes not only of computation but of computing generally (i.e. including the effects of sociotechnical infrastructures and power relations on technology) more transparent. This is not straightforward because, as Crawford and Joler (2018) have ‘anatomically’ captured, the complexity and scale of genAI systems almost exceed human imagination, relying on a vast (and rapidly changing) capitalistic matrix of hardware and software and human and non-human relations. On the technical level, this means it is difficult to explain why (especially more complex) AI systems make concrete decisions even under conditions of full transparency, because transparency does not automatically lead to comprehension (Esposito 2022). One way to pursue ‘deblackboxing’ that might be helpful for memory studies researchers is to turn to the computer science texts in which (basic) AI principles and procedures are described and theorized. Exemplifying this, Amoore et al. (2023, 1) approach these texts as contested sites ‘through which machine learning shapes the world’. For memory studies scholars, such an approach can help reveal how computer scientists understand and approach memory and the processes of remembering and forgetting in computers and software (see Merrill 2023).

Still, genAI systems can generally be understood technically as involving three pillars – big datasets, high-performance computing infrastructures, and the machine learning algorithms applied to data to synthesize content – all underpinned by computer science and mathematics (Van Assen et al. 2020). For instance, chatbots, or by their less anthropomorphized name, transformer-based instruction-tuned text generators, are guided by computational training tasks and mathematical principles (e.g. probabilistic reasoning) that are applied to large volumes of training data to create models powering interfaces that respond to user inputs in line with specific patterns in the training data (Smit et al. 2024; see Paglen and Downey 2023 on genAI image creators).

GenAI chatbots: computer programs that simulate conversations usually by using LLMs to examine user inputs and provide responses.

An important principle for many genAI applications is to model a baseline (Chen and Chen 2022). For the LLMs behind chatbots, this baseline is conventionally equated to the statistical prediction of units called tokens (e.g. words, letters, and symbols) based on their training dataset, mimicking language patterns without understanding what they are ‘chatting’ about (Bender et al. 2021). GenAI systems are thus profoundly dependent on their training data and on the computational, and often implicitly cultural, principles prioritized by their designers and developers, including how to approach outliers, parameterization, and randomization. These principles are rarely divulged publicly. Similarly opaque is how these principles are translated into ‘guardrails’ that aim to prevent the misuse of genAI. So too, the training data behind different chatbots, including whether it is original or synthetic (e.g. itself AI-generated), is often a closely guarded secret, even though it determines the semantics of the content produced, the risks of so-called ‘model collapse’, and ethical concerns about data ownership and privacy. Critically, it is also unclear how far designers and developers take into consideration the social impacts of their decisions as their hypothetical ‘user’ becomes millions of users (Salvaggio 2025). While some of these technical critiques also apply to other technologies, collectively they raise important questions when considering the relationship between genAI and memory.

Baselines: sets of data points used for training, validating, and testing AI models.
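The statistical prediction of tokens described above can be illustrated with a deliberately simplified sketch. The toy bigram model below is a hypothetical illustration only (production LLMs use transformer networks with billions of parameters, not frequency counts): it predicts each next token purely from patterns in its ‘training data’, without any understanding of the text it emits.

```python
from collections import Counter, defaultdict

# Toy 'training data': a tiny corpus split into word tokens.
training_text = (
    "memory shapes the past and the past shapes memory "
    "and the past is remembered"
).split()

# Count which token follows which (a bigram model): the simplest
# possible statistical baseline for next-token prediction.
successors = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    successors[current][nxt] += 1

def predict_next(token):
    """Return the most frequent token observed after `token`,
    or None if the token never appeared in the training data."""
    if token not in successors:
        return None
    return successors[token].most_common(1)[0][0]

# 'the' is followed by 'past' three times in the corpus, so that
# becomes the model's prediction, regardless of meaning or truth.
print(predict_next("the"))  # → past
```

Scaled up enormously and combined with instruction tuning, the same principle – predicting likely continuations from patterns in training data – is what makes chatbot outputs gravitate towards a statistical ‘line of best fit’, and why anything rare in the training data (a mnemonic outlier) is unlikely to be predicted.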

Most fundamentally, we might ask: should genAI be designed to only create factually accurate content, or should it allow greater degrees of mnemonic creativity? Should models forget and machines unlearn (cf. Bourtoule et al. 2021)? What might it mean for memory when future outcomes are statistically modelled on past evidence? How far might relinquishing memory to statistical, rather than cultural, weighting create hegemonic, ‘line of best fit’ forms of remembrance that further diminish the importance of mnemonic outliers? What, in short, are the memory baselines of genAI? Are they based on official historiographies or a greater diversity of sources, and how should the inevitable differences between these be addressed? Are memories that do not match an agreed baseline or fall outside its parameters no longer valid? What might this mean for the contestation of memory and phenomena such as memory activism? Such questions also have implications for the design of genAI systems specifically for mnemonic purposes, whether chatbots in heritage institutions or personalized digital duplicates of historical figures (see Kozlovski and Makhortykh 2025). These and other mnemonic uses should determine the computational logic behind genAI applications, the training data required to implement them, and the guardrails preventing their possible misuse. This encourages memory scholars to be involved in these design processes and decisions, but also in discussions about genAI policy, regulation, and law. We consider what all this may mean for users – for example, in terms of who decides on and differentiates between appropriate and inappropriate mnemonic uses of genAI – in the next section, dedicated to praxis critiques.

Praxis critiques

A key debate regarding the use of genAI in everyday life relates to whether it will help extend our memory, cognitive capacities, and creativity or whether we will offload our memory and knowledge to genAI so much that it compromises our intellectual autonomy. Navigating this debate, which has long characterized the subfield of digital memory studies (Hoskins 2011, 2013), depends, in part, on understanding users’ differing levels of expertise and AI literacy (Imundo et al. 2024).

Experts in different fields use genAI to acquire synthesized information and feed creative thinking (Javaid et al. 2023; Zhu et al. 2024). Their specialized knowledge also allows them to better detect incorrect or incomplete genAI outputs related to their field, although they are not immune to genAI’s errors, especially if they rely on it as an external memory aid (Fisher and Oppenheimer 2021; Azaria et al. 2025). Novices, meanwhile, may use genAI (e.g. chatbots) to learn because it provides accessible, well-organized, and coherent information through human-like dialogues, but their lack of specialized knowledge makes them more vulnerable to errors in the generated content (Fang et al. 2019; Hennekeuser et al. 2024). Whatever a user’s level of domain expertise, then, expertise in using AI – AI literacy – is also important.

Domain expertise: the specialized knowledge of a specific field which can provide insight into, amongst other things, operational requirements and constraints of genAI systems, and the sources and limitations of their training data.

There is a need for more research on how genAI–human interactions influence human memory capacities and what boundary conditions shape this process. What might be the optimal conditions for unburdening human memory through offloading to genAI while maintaining a critical knowledge base when partnering with genAI? How might the discrepancies in human mnemonic capacity, which can be both diminished and exaggerated by genAI, relate to expertise, but also to socioeconomic, racial, gender, or cultural differences? The role of memory scholars here could expand beyond studying genAI’s praxis-related implications to becoming ‘domain experts’ who can co-design AI and influence its surrounding policy and regulation towards cognitively advantageous and societally just outcomes.

Recognizing the impact of differential expertise also connects to debates about whether genAI enables or endangers human mnemonic agency. Does the ability of genAI to synthesize ‘the past’ deprive our memory of authenticity and render our life stories anti-autobiographical (Hoskins 2024)? Or do individuals and groups retain the power to make decisions and choices over what to remember and what to forget despite the technological hype (Wang 2019)?

The urgency of these questions is underlined by genAI ‘chatbots’ possessing the illusory appearance of human features that seem to reduce or replace human agency. They can assume various human-like personas, perform highly on tasks that require sensitivity to human emotions, and provide instant insights on complex intellectual questions via dialogues that reinforce the illusion that they possess human-like consciousness (Elyoseph et al. 2023; White et al. 2023). Chatbots can also automatically acquire and generate information about individual and collective pasts without human approval or control (Hoskins 2024) and shape the mnemonic agency of human users according to how they have been designed and developed, for example, in terms of their parameters of possible interaction.

Designers and developers, as humans, command their own forms of mnemonic agency (Smit et al. 2024). Human users also retain agency, in addition to their expertise or AI literacy. They can decide when, how, under what circumstances, and for what purpose to use genAI, just as when they confront other digital technologies (Wang 2019). They often provide both the prompts and the data – typically in the form of information about their individual and collective pasts shared on social media (see Wang and Hoskins 2024) – which are then used to train LLMs and ultimately shape what chatbots ‘reassemble’ for them as ‘memories’. Nor is their remembering restricted only to the content that these chatbots provide them – context is also important, with humans and genAI prompting each other to remember (Smit et al. 2024). Historicizing genAI, this process is not unlike the transactive, dialogical, and phenomenological constructions of one’s autobiographical memory that occur within other online settings (Wang 2022; Merrill forthcoming). It is thus critical that memory scholars work to disentangle how human mnemonic agency interacts with that of genAI and avoid totalizing and sensationalist prognoses of the loss of human agency, which contribute to the AI hype. Indeed, there are good arguments for thinking of the ‘A’ in ‘AI’ differently – in terms of augmentation rather than artificiality (Dekeyser and Whitehead 2025).

Geopolitical critiques

Beyond problematizing how users from various global demographics may remember differently with genAI, applying a geopolitical critique highlights how global power dynamics – particularly involving the EU, US, and China – shape the development and distribution of genAI (Larsen 2022; Kennedy 2025) and, in turn, the global battle to control public narratives regarding historical and contemporary events. With AI historically embedded in capitalist logics and driven by government funding priorities and military support for scientific research (Nilsson 2010; see also Pilkington 2024), the current hype can be understood in the context of major geopolitical and economic uncertainty and AI’s increasing use in warfare. Indeed, the companies behind the most prominent commercial ‘chatbots’ are now pivoting towards military contracts (O’Donnell 2024).

The competition for global genAI dominance, and thus for political and mnemonic influence, has been intensified by the rising economic, technological, military, and political power of China in recent decades. Although the US still holds an edge in advanced AI systems, China is catching up quickly through the development of open-source LLMs, strategic investments, and government support (Kennedy 2025). Pertaining to memory, this genAI geopolitical rivalry will likely increasingly contribute to and coalesce with globally polarizing debates regarding historical nihilism and historical revisionism. Relatedly, there is recent evidence of US-owned genAI ‘chatbots’ being used to promote Russian geopolitical interests by aiding the censorship of undesired pasts (Urman and Makhortykh 2025).

Global South countries, including in Africa, Latin America, and Southeast Asia, also play an active role in genAI geopolitics through alliances and by setting regulations (see Feakin 2025), but a geopolitical perspective also highlights the uneven global distribution of genAI’s costs and benefits between the Global North and South. Many ‘chatbots’, for example, work discriminatorily by silencing and under- or misrepresenting different minority groups across the world (Okolo 2023). This perpetuates and amplifies unequal global power relations, further disempowering already marginalized communities and countries. The size of LLM training datasets does not guarantee their diversity (Bender et al. 2021). At every stage of their curation – from initial online participation to data collection and fine-tuning, including reinforcement learning with human feedback – current practices favour hegemonic perspectives (Smit et al. 2024). Even while genAI can have democratizing mnemonic effects at certain scales, in general, the memories of certain hegemonic groups are widely represented, while those related to marginalized groups are underrepresented or absent. As such, the datafied memory of Global South cultures is often missing due to ‘algorithmic exclusion’ (Albert and Delano 2023), ‘digital cultural colonialism’ (Kizhner et al. 2021) and other processes of digital suppression, including, in some contexts, state-led censorship.

Fine-tuning: the further training of an AI model on a specific dataset to improve its performance with respect to a certain task.

Reinforcement learning with human feedback: A fine-tuning technique that uses human judgement of genAI content to further train AI models.
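The two definitions above can be made concrete with a deliberately minimal sketch in Python. Here a toy bigram word-counter stands in for an LLM; the data, names, and mechanics are purely illustrative and bear no resemblance to any real training API. ‘Fine-tuning’ appears simply as further training on a small domain-specific corpus, and the final step crudely mimics reinforcement learning with human feedback by up-weighting a continuation that hypothetical human raters preferred.

```python
from collections import Counter, defaultdict


class ToyLanguageModel:
    """A toy next-word predictor based on bigram counts.

    It stands in for an LLM only to illustrate the training stages
    defined above; nothing here resembles a real training pipeline.
    """

    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, corpus, weight=1):
        # Count (and optionally re-weight) adjacent word pairs.
        for sentence in corpus:
            words = sentence.lower().split()
            for a, b in zip(words, words[1:]):
                self.counts[a][b] += weight

    def predict(self, word):
        # Return the most frequently seen continuation, if any.
        following = self.counts.get(word.lower())
        return following.most_common(1)[0][0] if following else None


model = ToyLanguageModel()

# 'Pretraining' on a broad corpus: "records" becomes the likeliest continuation.
model.train([
    "the archive holds records",
    "the archive holds records",
    "the archive holds documents",
])
assert model.predict("holds") == "records"

# Fine-tuning: further training on a small task-specific dataset
# shifts the model's behaviour towards the new domain.
model.train(["the archive holds testimony"] * 3)
assert model.predict("holds") == "testimony"

# A crude stand-in for reinforcement learning with human feedback:
# a continuation preferred by (hypothetical) human raters is up-weighted.
model.train(["the archive holds records"], weight=5)
assert model.predict("holds") == "records"
```

The sketch also hints at why such curation favours hegemonic perspectives: whichever corpus or feedback carries the most weight simply overwrites the model’s ‘memory’ of less frequent alternatives.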

GenAI’s under-representation of non-hegemonic memory is partly linked to the dominance of English in computational linguistics (see Joshi et al. 2020; Bender et al. 2021). GenAI can thus often perpetuate colonial knowledge regimes that disregard alternative ways of understanding and interpreting the world (Birhane and Talat 2023; Lewis et al. 2025). In turn, genAI’s ‘average collective memory’, its mnemonic ‘line of best fit’, can hide a richer diversity of remembrance cultures (Makhortykh 2024; Schuh 2024). This process not only reinforces existing power imbalances regarding which memories can be accessed and which cannot – new forms of memory imperialism – but also erases the distinct meanings that memory, forgetting, or trauma may hold for marginalized communities. Meanwhile, the rapid emergence of LLMs not only in high-resource languages like Chinese and Russian (e.g. CT-LLM or YandexGPT) but also in low-resource languages like Kazakh and Swahili (e.g. KazLLM or UlizaLlama) offers alternatives that may foreground other renderings of the past.

GenAI systems also rely on an uneven international division of digital labour (Fuchs 2014). They are designed and owned by powerful companies in the Global North, but various stages of their implementation are outsourced to the Global South. A significant part of the data collection pipeline – data labelling – relies, for instance, on ‘ghost workers’ in the latter. While this labour helps generate massive earnings, the profits are captured by others, creating a stark disparity between the millions earned by data labelling companies and the low wages of their workers (Okolo 2023). Similarly, ‘chatbots’ have been tested in refugee camps without adherence to the rigorous ethical procedures applied elsewhere. These practices rework and revitalize ‘colonial genealogies through processes of extraction, coloniality, control, and discrimination’ (Madianou 2025, 18).

Questioning and overcoming these global mnemonic power imbalances should lie at the heart of memory studies’ genAI-related concerns. We need to ask: will global competition for AI dominance benefit end users by creating more divergent narratives, views, and perspectives, or polarize the world even further? Whose memories are being preserved, transformed, or erased by genAI? Can genAI accommodate the plurality of mnemonic epistemologies across cultures, or can it only reproduce hegemonic views? How might memory scholars work alongside communities affected by digital erasure to resist genAI’s technocolonial logics? Can genAI systems be reimagined not as tools of erasure but as platforms for restorative memory work – and if so, under what ethical and political conditions? Such questions are especially important given that those communities least likely to benefit from genAI are also often the most vulnerable to its harms, especially those of an environmental character.

Environmental critiques

Environmental critiques of AI and its surrounding hype complicate claims that digital technology, through its specific temporalities, has radically reformulated human remembering and forgetting, and that this is exacerbated by genAI (Ernst 2013; Hoskins 2013, 2024). Whilst notions of decay time and entanglement have been crucial in digital memory studies, the discussion has predominantly focused on acknowledging the relationality of self and machine, and of self and data (Hoskins 2013, 2015, 2024). Widening the focus to emphasize the wider environment brings the discussion of AI’s mnemonic implications, and in turn the subfield of digital memory studies, into the realm of ‘fourth wave’ memory studies research (see Erll 2024). Expanding the focus to the broader geological plane of ‘media’ (understood to include all meaning-making matter) draws attention to planetary decay happening at alarming rates, accelerated by an increasing dependence on digital technology (Parikka 2015; Crawford 2021).

This focus resonates with memory studies’ ‘anthropocentric turn’ and the growing consideration of ‘planetary memory’ (Bond et al. 2018), the ‘deep-time of the earth’ (Chakrabarty 2009), and ‘terrestrial memory’ (Golańska 2023), as well as with attention to human and non-human relations in ways that hold us ‘responsible and accountable for our actions towards’ the other (Kennedy 2017, 506). Zooming out further, it brings the wider organic ecosystems of our planet (and beyond) into view. To resist the genAI hype through an environmental critique, then, is to expand the spatial and temporal dimensions of our focus away from only the interactions with, and outputs produced by, genAI, towards the entanglement of deep time ingrained in, yet hidden by, the convenient instancy of genAI interfaces. As Reading argues, we need to go further, beneath and beyond the surface level of ‘the skin or screen of digital memory’ (2014, 753; see also Loots 2024). This means going even further than tracing the capitalistic technological infrastructures and human relations on which genAI relies, to also foreground what the production of genAI involves in terms of resources and energy consumption, and the long-term consequences of extraction and power use.

Exemplifying this, Crawford and Joler’s ‘anatomy of AI’ exceeds the technical in acknowledging that whilst our ‘encounters with AI are fleeting and brief’, behind each lies the ‘interlaced chains of resource extraction, human labour and algorithmic processing across networks of mining, logistics, distribution, prediction and optimization’ (2018). Thus, whilst typing a quick prompt into a chatbot might seem frivolous, the energy use that enables computation and the material resources required to create, maintain, and extend the hardware necessary for ever-expanding data storage and processing are disruptive to existing ecosystems. As Crawford elsewhere notes, ‘from the perspective of deep time, we are extracting Earth’s geological history to serve a split second of contemporary technology time’ (2021, 31).

In principle, there is nothing new here. Media – digital or otherwise – have always relied on substantial material extraction, production, and waste (Maxwell and Miller 2012; Parikka 2015), and the tension between the illusion of immateriality (e.g. the ‘cloud’, the AI ‘black box’) and this continuous geological deep time of media is built into the algorithmic logic of computing. Yet the mainstreaming of genAI, which promises enhanced forms of computing that work at complexities, scales, and speeds beyond human capacities, makes us feel unintelligent, irrelevant, and somewhat powerless in its shadow. Meanwhile, these systems irreparably propel the destruction of our ecosystems, leaving a lasting imprint on the planet.

The field of memory studies has paid relatively little attention to the broader material consequences of the ‘often obfuscated environmental exploitation and friction between capital and labour that go into these newer forms of mediated memory’ (Reading 2014, 749; see Loots 2024). Applying an environmental critique to genAI calls on us as memory scholars to go beyond the interface encounter and the illusion and deception of that experience (Natale 2021). How can we be attentive to what is not visible in that moment? This demands an infrastructural approach to memory construction; that is, a need to scrutinize how current AI-enabled memory practices and technologies are materially supported. Memory scholars could make visible how our planetary and supra-planetary environments and resources – the deep time of our planet and universe – are problematically entangled with our sociotechnical systems of memory. A focus on the environmental dimension of genAI thus helps shift our critical gaze towards the memory of the Earth and its universal neighbours. Ultimately, this focus offers a posthuman approach to the entanglements of humans and nonhumans in producing genAI and creating memory. This ‘holds the potential to cultivate response-able forms of memory, reshaping how essential interdependence is practiced in the everyday rituals of living and remembering within our more-than-human world’ (Gündoğan İbrişim 2024, 101).

Conclusion

This commentary has explored how memory scholars might handle the AI hype by highlighting several lines of critique through which to interrogate genAI’s nexus with memory. In doing so, it has considered what memory is in relation to the technologies captured under the ‘AI’ label. Our key takeaway is that the mnemonic study of AI should be specific. Two interlinked questions help reveal this specificity. Firstly, what sort of ‘AI’ is under scrutiny? Secondly, what form of memory (in relation to AI) is the object of study? At present, most attention in the field seems to be on genAI, but this too demands specification. For example, instead of asking what the impact of genAI on memory is, a more specific research question would be: how does the use of transformer-based instruction-tuned text generators shape public memory of political conflicts? Or: how do personal digital assistants enable new forms of interaction with a family’s past? How do image generators fabricate historical representations? What are the environmental impacts of heritage institutions’ uses of machine learning and cloud services? The list could go on. The point is that for memory studies research to contribute to understanding genAI, it needs to be specific. This will also help demonstrate the value of (or indeed potentially temper) the plethora of concepts already circulating around genAI and memory.

Specificity demands detailed empirical research and methodological rigor. Thus, we close our commentary with a call for memory scholars to commit to adopting and, where necessary, developing robust methods that allow the close empirical study of the relationship between genAI and memory. In particular, we believe the field will benefit from empirical research that is conscious of the five areas of critique outlined here. While not exhaustive, these encourage empirical research that: (1) problematizes the common histories of AI and historicizes the continuities and ruptures in technology-assisted remembrance that genAI represents, (2) seeks to understand how the disparate technological components of different genAI systems are designed and work in relation to memory, (3) centres on how different types of people practice and experience memory with genAI, (4) acknowledges how the global power dynamics surrounding genAI’s production and distribution have implications for different memory cultures, and (5) explores the overlaps of memory studies’ environmental and digital subfields. Not all of these can necessarily be covered in depth in a single study. However, macro-perspectives that consider the historical and technical specificities of genAI systems and acknowledge what they make visible in praxis, while reckoning with the broader geopolitical and environmental consequences of genAI use that are often invisible, can serve as valuable points of departure and contextualization for studies that might go on to focus on one or more of these areas of critique in more detail. In short, memory scholars would do well to start any critical investigation into genAI (or any other AI) and memory by considering how it could be read through all these different areas of critique, before narrowing the focus of their study.

These areas of critique, as we have demonstrated, also offer opportunities for collaborative interdisciplinary research, with no single discipline fully equipped to pursue any of them in isolation. Such an interdisciplinary approach to (gen)AI in memory studies then holds the possibility of building on concerns in philosophy and the psychological and cognitive sciences regarding what ‘remembering’ is (to echo broader debates about what ‘thinking’ is) in the AI age, complementing but also complicating this ‘remembering’ by situating it in broader technological, social, cultural, economic, political, and environmental contexts. This could be achieved by learning from critical AI studies, whilst remembering the long history of debates in media studies and sociology regarding earlier (both pre-digital and digital) technologies. Such an endeavour should still, however, seek to remain sensitive to how particular AI systems work and the computational logics underpinning them, as well as how they are used, thus engaging with the broader field of AI development in computer science and, again, with disciplines like sociology and social psychology. Rather than being led by specific case studies and immediately identifying a rupture caused by an abstract ‘AI’, we suggest it might be more fruitful to begin by considering a chosen case in terms of the five areas of critique highlighted in this commentary, simultaneously exploring the depth and breadth of the relationships between AI and memory that the case can foreground. Such an approach would immediately help demystify ‘AI’.

The possible approaches for memory studies that we have described throughout this commentary also point to an emerging ethics for studying genAI and memory. Such an ethics demands that memory scholars (ourselves included) interrogate their own complicity in the AI hype. Given what is known about the negative consequences of genAI, ethical tensions arise regarding how far our future empirical analysis of genAI should rely on the technology itself, for instance, when using it to understand AI history (Volynskaya 2024) or how users remember with it (Smit et al. 2024). Can memory research always justify the use of these extractive and damaging systems? Might more conventional methods like interviews sufficiently capture the effect/affect of genAI for memory? Likewise, is our call for ‘memory domain experts’ merely a more morally insulated way of jumping on the bandwagon? Does this commentary itself benefit from and feed the hype? Is its rehearsal of arguments well known in certain academic quarters, for a new memory studies audience, indicative of a mode of academic work that is increasingly aligning with the pleasing and (over-)productive logics of genAI itself? These are the sorts of uncomfortable questions that we all need to ask ourselves when engaging with genAI as a research topic. This commentary has not sought to resolve these quandaries outright, nor the many others raised by one of memory studies’ newest research objects; instead, it has aimed to highlight them in the hope that as a field we are able to commit to critically working through them, handling the hype and our complicity therein as we go.

Data availability statement

There are no primary data associated with this commentary.

Acknowledgements

We would like to acknowledge the positive and productive feedback of the collection’s guest editors and the commentary’s three peer reviewers as well as the early input provided by Thomas Smits, who was originally part of the author team.

Funding statement

Samuel Merrill’s contribution to the commentary was supported by the Swedish Research Council. Mykola Makhortykh’s contribution was supported by the Alfred Landecker Foundation as part of the project titled ‘Algorithmic turn in Holocaust memory transmission: Challenges, opportunities, threats’. Silvana Mandolessi’s contribution was supported by the European Union’s Horizon 2020 research and innovation programme under the grant agreement No 677955 (Digital Memories). Victoria Grace Richardson-Walden’s contribution was supported by funding from the Alfred Landecker Foundation via the Landecker Digital Memory Lab, University of Sussex. Qi Wang’s contribution was supported by a Hatch grant from the US National Institute of Food and Agriculture. The above funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests

The authors declare none.

Samuel Merrill is an Associate Professor at Umeå University’s Department of Sociology and Centre for Digital Social Research (DIGSUM) in Northern Sweden. He specializes in digital and cultural sociology, and his research interests concern, among other things, the intersections between memory and digital technology, social media platforms, and AI systems.

Mykola Makhortykh is an Alfred Landecker lecturer at the University of Bern’s Institute of Communication and Media Science, where he studies the impact of algorithmic systems and AI on Holocaust memory transmission.

Silvana Mandolessi is an Associate Professor of Cultural Studies at KU Leuven, Belgium. Her research examines the impact of the digital turn on memory practices, with a particular emphasis on Latin America.

Victoria Grace Richardson-Walden is a Full Professor of Digital Heritage, Memory and Culture, Director of the Landecker Digital Memory Lab, and Deputy Director of the Weidenfeld Institute for Jewish Studies at the University of Sussex. Her research explores human–computer entanglements in digital memory cultures.

Rik Smit is a Senior Lecturer at the Centre for Media and Journalism Studies at the University of Groningen, the Netherlands. He studies the relationships between memory and digital platforms, algorithmic culture, and AI models.

Qi Wang is Joan K. and Irwin M. Jacobs Professor of Human Development, Psychology, and Cognitive Science at Cornell University. Her research focuses on the self and mental time travel in the cultural context of beliefs, ideologies, practices, media, technology, and more.

References

Albert, K and Delano, M (2023) Algorithmic exclusion. In Lindgren, S (Ed.), Handbook of Critical Studies of Artificial Intelligence. Edward Elgar, pp. 538548. https://doi.org/10.4337/9781803928562.00056.CrossRefGoogle Scholar
Amoore, L, Campolo, A, Jacobsen, B and Rella, L (2023) Machine learning, meaning making: On reading computer science texts. Big Data and Society 10(1). https://doi.org/10.1177/20539517231166887.CrossRefGoogle Scholar
Azaria, A, Azoulay, R and Reches, S (2025) ChatGPT is a remarkable tool—For experts. Data Intelligence 6(1), 240296. https://doi.org/10.1162/dint_a_00235.CrossRefGoogle Scholar
Bender, EM, Gebru, T, McMillan-Major, A and Shmitchell, S (2021) On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922.Google Scholar
Birhane, A, Ruane, E, Laurent, T, Brown, MS, Flowers, J, Ventresque, A and Dancy, CL (2022) The forgotten margins of AI ethics. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery. https://doi.org/10.1145/3531146.3533157.Google Scholar
Birhane, A and Talat, Z (2023) It’s incomprehensible: On machine learning and decoloniality. In Lindgren, S (Ed.), Handbook of Critical Studies of Artificial Intelligence. Edward Elgar, pp. 128140. https://doi.org/10.4337/9781803928562.00016.CrossRefGoogle Scholar
Bond, L, Rapson, JK and de Bruyn, B (Eds.) (2018) Planetary Memory in Contemporary American Fiction. Routledge.Google Scholar
Bourtoule, L, Chandrasekaran, V, Choquette-Choo, CA, Jia, H, Travers, A, Zhang, B, Lie, D and Papernot, N (2021). Machine unlearning. In 2021 IEEE Symposium on Security and Privacy. IEEE, pp.141159. https://doi.org/10.1109/SP40001.2021.00019.CrossRefGoogle Scholar
Broussard, M (2023) More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. MIT Press. 10.7551/mitpress/14234.001.0001CrossRefGoogle Scholar
Buolamwini, J (2023) Unmasking AI: My Mission to Protect What Is Human in a World of Machines. Random House.Google Scholar
Cave, S, Dihal, K and Dillon, S (Eds.) (2024) AI Narratives: A History of Imaginative Thinking about Intelligent Machines. Oxford University Press.Google Scholar
Chakrabarty, D (2009) The climate of history: Four theses. Critical Inquiry 35(2), 197222. https://doi.org/10.1086/596640.CrossRefGoogle Scholar
Chen, RH and Chen, C (2022) Artificial Intelligence: An Introduction for the Inquisitive Reader. Chapman and Hall. 10.1201/9781003214892CrossRefGoogle Scholar
Crawford, K (2021) Atlas of AI. Yale University Press.Google Scholar
Crawford, K and Joler, V (2018) Anatomy of an AI system. Available at https://anatomyof.ai/ (accessed 29 April 2025).Google Scholar
Dekeyser, T and Whitehead, M (2025) What is artificial about artificial intelligence? A provocation on a problematic prefix. AI and Society 40, 33713372 https://doi.org/10.1007/s00146-024-02114-8.CrossRefGoogle Scholar
Dixon, C, Hsi, S, and Oh, H (2022) From Unblackboxing to Deblackboxing: Questions about what to make visible in computational making. In ACM CHI Conference on Human Factors in Computing - CHI 22 Workshop: CHI ‘22 Workshop: Reimagining Systems for Learning Hands-on Creative and Maker Skills. Association for Computing Machinery. Available at https://par.nsf.gov/biblio/10347967.Google Scholar
Elyoseph, Z, Hadar-Shoval, D, Asraf, K and Lvovsky, M (2023) ChatGPT outperforms humans in emotional awareness evaluations. Frontiers in Psychology 14, 1199058. https://doi.org/10.3389/fpsyg.2023.1199058.CrossRefGoogle ScholarPubMed
Erll, A (2024) Transculturality and the eco-logic of memory. Memory Studies Review 1(1), 1735. https://doi.org/10.1163/29498902-20240002.CrossRefGoogle Scholar
Ernst, W (2013) Digital Memory and the Archive. MIT Press.Google Scholar
Esposito, E (2022) Does explainability require transparency? Sociologica 16(3), 1727. https://doi.org/10.6092/issn.1971-8853/15804.Google Scholar
EU (2024) The European Union Artificial Intelligence Act. Available at http://data.europa.eu/eli/reg/2024/1689/oj.Google Scholar
Fang, Y, Lippert, A, Cai, Z, Hu, X and Graesser, AC (2019) A conversation-based intelligent tutoring system benefits adult readers with low literacy skills. In Sottilare, RA and Schwarz, J (Eds.), Adaptive Instructional Systems: First International Conference, AIS 2019. Springer, pp. 604614. https://doi.org/10.1007/978-3-030-22341-0_47.CrossRefGoogle Scholar
Fazi, BM (2019) Can a machine think (anything new)? Automation beyond simulation. AI and Society 34(4), 813824. https://doi.org/10.1007/s00146-018-0821-0.CrossRefGoogle Scholar
Fazi, BM (2024) The computational search for unity: Synthesis in generative AI. Journal of Continental Philosophy 5(1), 3156. https://doi.org/10.5840/jcp202411652.CrossRefGoogle Scholar
Feakin, T (2025) A.I. geopolitics beyond the U.S.-China Rivalry: The role of the global south. Aspen Digital. Available at https://www.aspendigital.org/blog/ai-geopolitics-beyond-the-us-china-rivalry/.Google Scholar
Fisher, M and Oppenheimer, DM (2021) Who knows what? Knowledge misattribution in the division of cognitive labor. Journal of Experimental Psychology: Applied 27(2), 292306. https://doi.org/10.1037/xap0000310.Google ScholarPubMed
Floridi, L (2024) Why the AI hype is another tech bubble. Philosophy and Technology 37, 128. https://doi.org/10.1007/s13347-024-00817-w.CrossRefGoogle Scholar
Foka, A and Griffin, G (2024) AI, cultural heritage, and bias: Some key queries that rise from the use of genAI. Heritage 7(11), 61256136. https://doi.org/10.3390/heritage7110287.CrossRefGoogle Scholar
Foka, A, Eklund, L, Løvlie, A S and Griffin, G (2023) Critically assessing AI/ML for cultural heritage: Potentials and challenges. In Lindgren, S (Ed.), Handbook of Critical Studies of Artificial Intelligence. Edward Elgar, pp. 173186. https://doi.org/10.4337/9781803928562.00082.Google Scholar
Foucault, M (1994) The Order of Things: An Archaeology of the Human Sciences. Vintage.Google Scholar
Fuchs, C (2014) Social Media: A Critical Introduction. Sage. 10.4135/9781446270066CrossRefGoogle Scholar
Gensburger, S and Clavert, F (2024) Is artificial intelligence the future of collective memory? Memory Studies Review 1(2), 195208. https://doi.org/10.1163/29498902-202400019.CrossRefGoogle Scholar
Golańska, D (2023) Memorializing the unspectacular: Towards a minor remembrance. Memory Studies 16(6), 15791593. https://doi.org/10.1177/1750698023120233.CrossRefGoogle Scholar
Gündoğan İbrişim, D (2024) Feminist Posthumanism, environment and “response-able memory”. Memory Studies Review 1(1), 93111. https://doi.org/10.1163/29498902-20240004.CrossRefGoogle Scholar
Haigh, T (2023) There was no “First AI Winter”. Communications of the ACM. Available at https://cacm.acm.org/opinion/there-was-no-first-ai-winter/.10.1145/3625833CrossRefGoogle Scholar
Hennekeuser, D, Vaziri, DD, Golchinfar, D, Schreiber, D and Stevens, G (2024) Enlarged education – exploring the use of generative AI to support lecturing in higher education. International Journal of Artificial Intelligence in Education. https://doi.org/10.1007/s40593-024-00424-y.Google Scholar
Hoskins, A (2011) Media, memory, metaphor: Remembering and the connective turn. Parallax 17(4), 1931. https://doi.org/10.1080/13534645.2011.605573.CrossRefGoogle Scholar
Hoskins, A (2013) The end of decay time. Memory Studies 6(4), 387389. https://doi.org/10.1177/1750698013496197.CrossRefGoogle Scholar
Hoskins, A (2015) Archive me! Media, memory, uncertainty. In Hajek, A, Lohmeier, C and Pentzold, Christian (Eds.), Memory in a Mediated World, Palgrave Macmillan, pp. 1325. https://doi.org/10.1057/9781137470126_2.Google Scholar
Hoskins, A (2024) AI and memory. Memory, Mind and Media 3, e18. https://doi.org/10.1017/mem.2024.16.CrossRefGoogle Scholar
Hoskins, A, Downey, A and Lagerkvist, A (Eds.) (2024) AI and memory collection. Memory, Mind and Media. Available at: https://www.cambridge.org/core/journals/memory-mind-and-media/memory-mind-and-media-collections/ai-and-memory-collection.10.1017/mem.2024.16CrossRefGoogle Scholar
Imundo, MN, Watanabe, M, Potter, AH, Gong, J, Arner, T and McNamara, DS (2024) Expert thinking with generative chatbots. Journal of Applied Research in Memory and Cognition 13(4), 465484. https://doi.org/10.1037/mac0000199.CrossRefGoogle Scholar
Javaid, M, Haleem, A and Singh, RP (2023) ChatGPT for healthcare services: An emerging stage for an innovative perspective. BenchCouncil Transactions on Benchmarks, Standards and Evaluations 3(1), 100105. https://doi.org/10.1016/j.tbench.2023.100105.CrossRefGoogle Scholar
Joshi, P, Santy, S, Budhiraja, A, Bali, K and Choudhury, M (2020) The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pp. 62826293. https://doi.org/10.18653/v1/2020.acl-main.560.CrossRefGoogle Scholar
Kennedy, R (2017) Multidirectional eco-memory in an era of extinction. Colonial whaling and indigenous dispassion in Kim Scott’s that Deadman dance. In Heise, UK, Christensen, J and Niemann, M (eds) The Routledge Companion to the Environmental Humanities. Routledge, pp. 268277.Google Scholar
Kennedy, M (2025) America’s AI Strategy: Playing Defense while China Plays to Win. Wilson Centre. Available at https://diplomacy21-adelphi.wilsoncenter.org/article/americas-ai-strategy-playing-defense-while-china-plays-win.Google Scholar
Kilgrove, K (2025) ‘ELIZA,’ the World’s 1st Chatbot, was just resurrected from 60-year-old computer code. Live Science. Available at https://www.livescience.com/technology/eliza-the-worlds-1st-chatbot-was-just-resurrected-from-60-year-old-computer-codeGoogle Scholar
Kizhner, I, Terras, M, Rumyantsev, M, Khokhlova, V, Demeshkova, E, Rudov, I and Afanasieva, J (2021) Digital cultural colonialism: Measuring bias in aggregated digitized content held in Google arts and culture. Digital Scholarship in the Humanities 36(3), 607640. https://doi.org/10.1093/llc/fqaa055.CrossRefGoogle Scholar
Kozlovski, A and Makhortykh, M (2025) Digital dybbuks and virtual golems: AI, memory, and the ethics of holocaust testimony. Memory, Mind and Media, e10. doi:10.1017/mem2025.10006.CrossRefGoogle Scholar
Lagerkvist, A and Reimer, B (2023) Bothering the binaries: Unruly AI futures of hauntings and hope at the limit. In Lindgren, S (Ed.), Handbook of Critical Studies of Artificial Intelligence. Edward Elgar, pp. 199208. https://doi.org/10.4337/9781803928562.00023.CrossRefGoogle Scholar
Lanier, J (2018) Ten Arguments for Deleting your Social Media Accounts Right Now. Random HouseGoogle Scholar
Larsen, B (2022) The Geopolitics of AI and the Rise of Digital Sovereignty, Brookings Institution. Available at https://coilink.org/20.500.12592/swc5mh.Google Scholar
Latour, B (1987) Science in Action: How to Follow Scientists and Engineers through Society. Open University Press.Google Scholar
Lewis, JE, Whaanga, H and Yolgörmez, C (2025) Abundant Intelligences: Placing AI within Indigenous Knowledge Frameworks. AI and Society. 40, 21412157. https://doi.org/10.1007/s00146-024-02099-4.CrossRefGoogle Scholar
Lindgren, S (Ed.) (2023) Handbook of Critical Studies of Artificial Intelligence. Edward Elgar. 10.4337/9781803928562CrossRefGoogle Scholar
Lindgren, S (2024) Critical Theory of AI. Polity Press.
Locke, C (2000) Digital memory and the problem of forgetting. In Radstone, S (Ed.), Memory and Methodology. Berg, pp. 25–36.
Loots, O (2024) Head in the clouds: A Deleuzoguattarian analysis of the environmental impacts of digital memory. Memory Studies 18(4), 950–969. https://doi.org/10.1177/17506980241276421.
Madianou, M (2025) Technocolonialism: When Technology for Good Is Harmful. Polity Press.
Magalhães, JC and Smit, R (2025) Less hype, more drama: Open-ended technological inevitability in journalistic discourses about AI in the US, the Netherlands, and Brazil. Digital Journalism, 1–18. https://doi.org/10.1080/21670811.2025.2522281.
Makhortykh, M (2024) Shall the robots remember? Conceptualising the role of non-human agents in digital memory communication. Memory, Mind and Media 3, e6. https://doi.org/10.1017/mem.2024.2.
Mandolessi, S (2023) The digital turn in memory studies. Memory Studies 16(6), 1513–1528. https://doi.org/10.1177/17506980231204201.
Markelius, A, Wright, C, Kuiper, J, Delille, N and Kuo, YT (2024) The mechanisms of AI hype and its planetary and social costs. AI and Ethics 4(3), 727–742. https://doi.org/10.1007/s43681-024-00461-2.
Maxwell, R and Miller, T (2012) Greening the Media. Oxford University Press.
McCarthy, J, Minsky, M, Rochester, N and Shannon, C (2006) A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine 27(4), 12–14. https://doi.org/10.1609/aimag.v27i4.1904.
Merrill, S (2023) Artificial intelligence and social memory: Towards the cyborgian remembrance of an advancing mnemo-technic. In Lindgren, S (Ed.), Handbook of Critical Studies of Artificial Intelligence. Edward Elgar, pp. 173–186. https://doi.org/10.4337/9781803928562.00020.
Merrill, S (Forthcoming) Forays into a posthuman phenomenology of memory: Remembering-in-the-world with Facebook’s ‘memories’ feature. In Ekelund, R, Guðmundsdóttir, G, Merrill, S and Sindbæk Andersen, T (Eds.), Hybrid Memory: Mnemonic Practices and Agencies in a Post-Digital World. Brill Publishing.
Minsky, M (1986) The Society of Mind. Simon and Schuster.
Natale, S (2021) Deceitful Media: Artificial Intelligence and Social Life after the Turing Test. Oxford University Press. https://doi.org/10.1093/oso/9780190080365.001.0001.
Nilsson, NJ (2010) The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge University Press.
O’Donnell, J (2024) OpenAI’s new defense contract completes its military pivot. MIT Technology Review. Available at https://www.technologyreview.com/2024/12/04/1107897/openais-new-defense-contract-completes-its-military-pivot/ (accessed 29 April 2025).
Okolo, CT (2023) Addressing global inequity in AI development. In Lindgren, S (Ed.), Handbook of Critical Studies of Artificial Intelligence. Edward Elgar, pp. 378–389. https://doi.org/10.4337/9781803928562.00040.
Paglen, T and Downey, A (2023) Influencing machines: Trevor Paglen and Anthony Downey. Digital War 6(3). https://doi.org/10.1057/s42984-024-00098-9.
Parikka, J (2015) A Geology of Media. University of Minnesota Press. https://doi.org/10.5749/minnesota/9780816695515.001.0001.
Pilkington, D (2024) Myopic memory: Capitalism’s new continuity in the age of AI. Memory, Mind and Media 3, e24. https://doi.org/10.1017/mem.2024.21.
Proudfoot, D (2011) Anthropomorphism and AI: Turing’s much misunderstood imitation game. Artificial Intelligence 175(5–6), 950–957. https://doi.org/10.1016/j.artint.2011.01.006.
Reading, A (2014) Seeing red: A political economy of digital memory. Media, Culture & Society 36(6), 748–760. https://doi.org/10.1177/0163443714532980.
Richardson-Walden, VG and Makhortykh, M (2024) Imagining human-AI memory symbiosis: How re-remembering the history of artificial intelligence can inform the future of collective memory. Memory Studies Review 1(2), 323–342. https://doi.org/10.1163/29498902-202400016.
Salvaggio, E (2025) It’s interesting because. Cybernetic Forests. Available at https://mail.cyberneticforests.com/its-interesting-because/ (accessed 29 April 2025).
Schuh, J (2024) AI as artificial memory: A global reconfiguration of our collective memory practices? Memory Studies Review 1(2), 231–255. https://doi.org/10.1163/29498902-202400012.
Slota, SC, Fleischmann, KR, Greenberg, S, Verma, N, Cummings, B, Li, L and Shenefiel, C (2020) Good systems, bad data?: Interpretations of AI hype and failures. Proceedings of the Association for Information Science and Technology 57(1), e275. https://doi.org/10.1002/pra2.275.
Smit, R, Smits, T and Merrill, S (2024) Stochastic remembering and distributed mnemonic agency: Recalling twentieth century activists with ChatGPT. Memory Studies Review 1(2), 209–230. https://doi.org/10.1163/29498902-202400015.
Smits, T and Wevers, M (2022) The agency of computer vision models as optical instruments. Visual Communication 21(2), 329–349. https://doi.org/10.1177/1470357221992097.
Stilgoe, J (2023) We need a Weizenbaum test for AI. Science 381(6658). https://doi.org/10.1126/science.adk0176.
Turing, A (1950) Computing machinery and intelligence. Mind 59(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433.
Urman, A and Makhortykh, M (2025) The silence of the LLMs: Cross-lingual analysis of guardrail-related political bias and false information prevalence in ChatGPT, Google Bard (Gemini) and Bing Chat. Telematics and Informatics 96, 102211. https://doi.org/10.1016/j.tele.2024.102211.
Van Assen, M, Banerjee, I and De Cecco, CN (2020) Beyond the artificial intelligence hype: What lies behind the algorithms and what we can achieve. Journal of Thoracic Imaging 35(Supplement 1), S3–S10. https://doi.org/10.1097/RTI.0000000000000485.
Verdegem, P (2021) AI for Everyone? Critical Perspectives. University of Westminster Press. https://doi.org/10.16997/book55.
Volynskaya, A (2024) Collective memory through computer memories: Retracing and interpreting the archive of the Stanford Artificial Intelligence Laboratory. Memory Studies Review 1(2), 343–363. https://doi.org/10.1163/29498902-202400017.
Vrabič Dežman, D (2024) Promising the future, encoding the past: AI hype and public media imagery. AI and Ethics 4(3), 743–756. https://doi.org/10.1007/s43681-024-00474-x.
Wang, Q (2019) The individual mind in the active construction of its digital niche. Journal of Applied Research in Memory and Cognition 8, 25–28. https://doi.org/10.1016/j.jarmac.2018.12.005.
Wang, Q (2022) The triangular self in the social media era. Memory, Mind and Media 1, e4, 1–12. https://doi.org/10.1017/mem.2021.6.
Wang, Q and Hoskins, A (2024) The Remaking of Memory in the Age of the Internet and Social Media. Oxford University Press.
Weizenbaum, J (1976) Computer Power and Human Reason: From Judgment to Calculation. W.H. Freeman and Company.
White, J, Fu, Q, Hays, S, Sandborn, M, Olea, C, Gilbert, H, Elnashar, A, Spencer-Smith, J and Schmidt, DC (2023) A prompt pattern catalog to enhance prompt engineering with ChatGPT. arXiv preprint arXiv:2302.11382. https://doi.org/10.48550/arXiv.2302.11382.
Whittaker, M (2021) The steep cost of capture. Interactions 28(6), 50–55. https://doi.org/10.1145/3488666.
Widder, DG and Hicks, M (2024) Watching the generative AI hype bubble deflate. arXiv preprint arXiv:2408.08778. https://doi.org/10.48550/arXiv.2408.08778.
Zhu, Y, Gao, J, Wang, Z, Liao, W, Zheng, X, Liang, L, Wang, Y, Pan, C, Harrison, EM and Ma, L (2024) ClinicRealm: Re-evaluating large language models with conventional machine learning for non-generative clinical prediction tasks. arXiv preprint arXiv:2407.18525. https://doi.org/10.48550/arXiv.2407.18525.