
Myopic memory: Capitalism's new continuity in the age of AI

Published online by Cambridge University Press: 21 November 2024

Danny Pilkington*
Affiliation:
University of Glasgow, UK

Abstract

This article is a commentary on the relationship between artificial intelligence (AI), capitalism, and memory. The political policies of neoliberalism have reduced the capacity of individuals and groups to reflect on and change the social world, while applications of AI and algorithmic technologies, rooted in the profit-seeking objectives of global capitalism, deepen this deficit. In these conditions, memory in individuals and across society is at risk of becoming myopic. In this article, I develop the concept of myopic memory with two core claims. Firstly, I argue that AI is a technological development that cannot be divorced from the capitalist conditions from which it emerges and in whose service it is implemented. To this end, I reveal capitalism and colonialism's historical and contemporary use of surveillance as a way to control the populations they oppress, imagining their pasts to determine their futures and disempowering them in the process. My second core claim emphasises that this process of disempowerment is undergoing an acute realisation four decades into the period of neoliberalism. Neoliberal policies have restructured society around the figure of the individual consumer, leaving little time, space, and institutional capacity for citizens to reflect on these policies' impact or challenge their dominance. As a result, with the growing role of AI and algorithmic technologies in shaping our engagement with society along similar lines of individualism, I conclude that the scope of memory is being narrowed and constrained within the prism of capitalism, diminishing its potential and rendering it myopic.

Type
Commentary
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

Introduction

The increasing impact of artificial intelligence (AI) and smartphone technology comes at a unique juncture in the history of contemporary society, which is to say the history of the capitalist mode of production. At a global level, capitalism has been the prevailing economic, political, ideological, and social context for over a century, with neoliberalism as its guiding principle since the 1980s. As a result, all production during this period – be it the production of materials or knowledge – has been influenced by the context of (neoliberal) capitalism.

The nature of this influence is contested. From a Marxist perspective, the capitalist system is one which harnesses every area of social life and production towards its main purpose: capital accumulation (Parenti 1997, 122, 132–135). This means that the social world, the field of production, is a space where interests are either aligned with or in opposition to the objective of profit accumulation. Our agency, the things we do, produce, and control, as individuals and as a society, is relative to our position in this structure (Bourdieu 1987, 2).

This reality is integral to the motivations behind and the outcomes of technological and social development, and it is central to this commentary on the relationship between AI and individual and collective memory. I propose here that the factors driving AI's implementation and trajectory, from weapons of war to social media, have the same capitalist and neoliberal roots as those impacting upon and weakening society's capacity for critical thought, reflection, and action. With the availability of incomprehensible amounts of information, in an era where the space for collective comprehension has been replaced by an infinite spectrum of individualistic consumerism, there is a risk that individual and collective memory – and, by extension, society's critical faculty – is on a myopic course.

It is precisely at a time when neoliberalism has restructured society on the basis of being an individualistic consumer, with narrow scope for individual or institutional opposition to this principle (Gilbert and Williams 2022, 42, 77), that we are becoming increasingly dependent on technology that encourages us to retreat into highly personalised yet opaque algorithmic realities. Anything is possible in our own virtual worlds and feeds – our relationships can be as we want them to be. There, infinite choice and personalisation give us a sense of power. Yet just as we have limited control over aspects of life such as housing, employment, privacy, and community, our virtual worlds are owned and controlled by unaccountable Silicon Valley elites. Their use, however ostensibly empowering and practical, is conditional on the forfeiture of personal and collective agency.

We use AI to augment our memory and understanding, just as AI uses us to enrich its database for providing that memory and understanding. This creates a memory loop or feed (Hoskins et al., forthcoming); one where both components are conditioned by the framework of capitalism. The risk I identify here is that the loop becomes a spiral of capitalist hegemony, with each rotation alienating humans further and further from control of their own conditions, memories, and selves. Capitalism has long normalised the commodification of life and self. Yet with the astonishing scale of AI, whose ostensibly all-seeing and all-knowing capacity gives it a veneer of objectivity, in an age where there is no time to think about problems, only to solve them, the solutions of capitalism may soon be the only ones we are able to conceive.
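The narrowing tendency of this loop can be illustrated schematically. The short simulation below is my own toy sketch, not an empirical model or anything drawn from the works cited here: a pool of distinct 'memories' is repeatedly sampled and then refiltered by an engagement-style curation step, and with each rotation of the loop fewer distinct memories survive as input for the next.

```python
# A toy sketch of a human-AI memory loop (illustrative assumptions only):
# each generation, the "model" is refit to whatever circulated most in the
# previous generation, so the pool of recallable memories steadily narrows.
import random
from collections import Counter

random.seed(0)

# Generation 0: a wide pool of distinct "memories".
pool = [f"memory_{i}" for i in range(100)]

for generation in range(1, 6):
    # What circulates this generation: recollections drawn from the pool.
    sampled = [random.choice(pool) for _ in range(1000)]
    # Engagement-driven curation: keep only the most-surfaced 75% of items
    # as training material for the next rotation of the loop.
    keep = (len(set(sampled)) * 3) // 4
    pool = [memory for memory, _ in Counter(sampled).most_common(keep)]
    print(f"generation {generation}: {len(pool)} distinct memories survive")
```

However crude, the dynamic is the point: nothing in the loop adds memories, while the curation step quietly removes them.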

There is a considerable body of work exploring the new memory ecologies of the 21st century. Theories of connective memory (Barnier and Hoskins 2022) and grey memory (Hoskins and Halstead 2021), for example, consider the impact of information overload and hyper-connected obscurantism in the digital age. Here, I propose that the current juncture in capitalist hegemony can be understood as an experience of myopic memory. This is where deep understanding is the enemy of instant gratification, where the capacity for critical action suffers with the prevalence of content consumption, and where the scope for agency in our lives is supplanted by a utility that is often technocratic and highly politicised.

The aim of this short commentary is to provide a preliminary conceptual framework for further empirical research and theoretical debate.

I develop the concept of myopic memory with two core claims:

  1. To trade in human memory for AI memory is to narrow the scope of our understanding to the prism of capitalism.

    AI, predicated on data accumulation, is currently developed, produced, and implemented within the context of a system whose primary objective is capital accumulation, meaning AI-generated or AI-supported memory is laden with the objectives of capitalism. Therefore, it is a memory with an explicit purpose, not necessarily in keeping with the interests of individuals and groups positioned less favourably in relation to capital. To uncritically accept the memory bias at technology's back end is to narrow the scope through which we conceptualise ourselves and the world.

  2. Under neoliberalism, we don't have the time or space to be critical: remembering is inconvenient.

    Decades of neoliberal policies and ideas have alienated the working class from material security and organisational capacity. Soaring inequality between rich and poor, both within and between nations, has made society extremely precarious. There is limited time and space for deep comprehension and reflection. In this real-world context, cultivating a critical perception and organising a political challenge is inconvenient; it is much easier to survive through a virtual experience of life made simple by utility apps for bureaucratic digital navigation, uncomplicated relationships, and distractive dopamine escapes.

These claims and their relationship are expanded upon in the sections that follow. It is my conclusion and central argument that society is at risk of experiencing a collective myopia due to neoliberalism's reconstitution of memory, in the individualised age of the data commodity, as something algorithmically produced and accepted rather than mediated by a wider range of factors and social groups. Capitalism has long put limitations on our agency; it does so now under conditions that acutely undermine our capacity to ask why, or better still, to do anything about it.

Capitalist technological development under AI: Same game, new rules

My first core claim, on the myopic risks of uncritically accepting a capitalist version of past, present, and future, lays out capitalism's enduring history of using technology to distort and augment how we see ourselves and the world.

Anderson (2006, 160–185) recalls the legacies of Western colonialism in southeast Asia, where subjugated populations were continually categorised through censuses and mapmaking to formalise the means by which their given status precluded certain rights. Anderson (2006, 169) notes that these processes ensured populations were 'mapped from on high'. Parallels can naturally be drawn with the age of AI, where individuals are constantly mapped from on high and exploited through algorithmic decisions that reflect an imagined 'self' or profile of the individual – through locations, spending patterns, and clicks – one that has been rendered from humanly incomprehensible amounts of personal data.

In 19th-century British colonial Malaya, censuses forced an extraordinary and 'continuously agglomerated, disaggregated, recombined, intermixed, and reordered' categorisation of subjugated Malay people (Anderson 2006, 163–164). Highly racialised categorical identity distinctions in Dutch East India Company Indonesia were imagined, quantified, and perpetuated to serve political ends. Indeed, these could see one's census categorisation determine how they 'dress, reside, marry, be buried, and bequeath property' (Anderson 2006, 168). The process Anderson outlines here is one of a deep alienation from one's own legacy, where the 'official' and highly political depiction has an enduring impact on the material realities of life and death.

Colonial states, driven by capitalism and technological developments in capitalism (then: print, now: AI) 'did not merely aspire to create, under [their] control, a human landscape of perfect visibility; the condition of this "visibility" was that everyone, everything, had (as it were) a serial number' (Anderson 2006, 184–185). This suggests that the model was one of dehumanisation – indeed, Césaire (2000, 42) proposes that 'colonisation = thingification' (cited in Downey 2021). Now, we are 'things' tracked and profiled with the most comprehensive serial number ever known: our digital footprint. The salient matter is establishing the extent of the risk posed by this extreme iteration of capitalism's longstanding tendency to categorise, objectify, and track us.

Many in the field of AI – including OpenAI[1] (creators of ChatGPT[2]) – state their concern with the hypothetical risk of a 'superintelligent' AI 'going rogue' and threatening humanity (Leike and Sutskever 2023). This is both obfuscatory and ironic; it kicks responsibility into the long grass. Capitalism already has a long-established precedent for crafting and maintaining a 'superintelligence' over the people it oppresses, defining their histories, and using it to map out their futures. Yet because neoliberal politics defers risk from the level of the state to that of the individual, companies can disavow the actual harm they cause now by deferring risk to a hypothetical point in the future.

But what might this look like in the age of AI?

AI is broadly understood as the capacity of a non-human machine to learn through repetition and recognition to the point where it can replicate human rationality in its actions (de-Lima-Santos and Ceron 2022, 14; Gil De Zúñiga et al. 2024, 30). A central feature of the advanced level of AI is its generative capacity. Generative artificial intelligence (GAI)[3] such as ChatGPT is powered by large language models (LLMs) that memorise patterns in data in order to predict future patterns. LLMs are able to make predictions after learning millions, billions, or trillions of parameters (learned weights encoding options and probabilities), derived from existing data available online such as articles, posts, and books (Mearian 2024).
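The prediction objective behind these definitions can be sketched without any neural network at all. The minimal bigram model below is an illustration of the principle only, with invented training text: it 'memorises' which token follows which, then predicts the most probable continuation. Real LLMs replace the count table with billions of learned parameters, but the task is the same.

```python
# A minimal sketch of the statistical principle behind LLMs: count which
# token follows which in training text, then predict the likeliest
# continuation. (Illustrative only; the training text is invented.)
from collections import Counter, defaultdict

training_text = "the past shapes the future and the future shapes the past"
tokens = training_text.split()

# "Training": record, for each token, the distribution of its successors.
successors = defaultdict(Counter)
for current, following in zip(tokens, tokens[1:]):
    successors[current][following] += 1

def predict_next(token: str) -> str:
    """Return the most frequent continuation seen in the training data."""
    return successors[token].most_common(1)[0][0]

print(predict_next("shapes"))  # -> 'the': the only continuation ever seen
```

The model has no notion of truth or meaning; it reproduces the regularities of whatever data it was given, which is precisely why the provenance of that data matters.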

These technological definitions provide important insight into the scale and potential of AI, both positive and negative. Yet it is the context in which AI is being produced and implemented that is of interest here. Forged under the pressure of global capitalism, whose remit drives technological and cultural developments in service of processes of profit accumulation (Mandel 1990 [1976]), AI is at once developing from and diligently reproducing a particular set of structural conditions.

The growing presence and influence of big tech conglomerates are a contemporary realisation of Lenin's (2021 [1917]) theory that capitalism would produce monopolies, and that these would inhibit rather than encourage 'healthy' market competition. In January 2024, the Federal Trade Commission (FTC), the United States' trade regulator, opened an investigation into whether the immense investment in AI technology from Microsoft, Amazon, and Google amounts to a breach of competition rules (Montgomery and Paul 2024). Meanwhile, Sam Altman, co-founder of OpenAI and AI de-regulation lobbyist, has said that AI will 'most likely lead to the end of the world, but in the meantime, there'll be great companies' (Lovely 2024). Thus, the rampant emergence and advancement of technologies such as ChatGPT, and any benefits or threats posed, is inseparable from the context of capital accumulation and monopolisation as systemic economic objectives and outcomes.

This context pervades and shapes political outcomes too. With UK regulators concerned by the potential for LLMs to embed biases and distort markets, the government is reportedly developing legislation that will regulate AI (Gross and Criddle 2024). This suggests a reluctant departure from its 'pro-innovation' rejection of regulation in the past (Mosolova 2023). The shift perhaps marks a recognition of the EU's 'ground-breaking' new AI Act,[4] which aims to 'set a global standard for AI regulation' by classifying and prohibiting AI with obligations according to risk.

It is worth noting that AI regulation has existed for some time, yet exemptions in the fields of policing, security, and migration services have left legislation vague, unable to provide society with greater democratic control, and liable to grant private technology companies a stake in matters of public democracy (O'Shea 2024). In this setting, the AI past can have a grave impact on present and future realities for vulnerable groups.

Both EU and US AI and immigration policies have failed to protect the privacy and rights of migrants; even the details of the new AI Act concerning border technologies and immigration fall short of the human rights and privacy-based standards advocated within academic research (Mengesha et al. 2024; Molnar 2023, 2024a). The world of AI regulation, and the crossover between the private and public sectors in the way AI is applied to our daily lives, is incredibly murky, as capitalist states wrestle with AI's usefulness (read: profitability) versus the need to ensure it is only used on their terms.

Under the guise of risk assessment, North African and Middle Eastern migrants crossing the Mediterranean to seek asylum in Europe during the last decade have had every step of their journeys scrutinised, categorised, and assessed using a range of unregulated and experimental technology including surveillance drones, AI lie detectors, and robo-dogs (Molnar 2023; Tyler 2022). This 'increasingly lucrative border industrial complex' is predicated on an 'opaque and discretionary world' of border policing and security underpinned by historical and systemic structures of racism and discrimination (Molnar 2023).

Here, AI decision-making technology, still in an experimental phase and clearly in tension with ethics and human rights, has been loaded with longstanding biases in order that these may be amplified and applied to present political realities. Migrants have become ever more marginalised from the factors which determine their future, while an AI arbiter renders it from a political imagining of their past. In this highly racialised application of AI, asylum seekers yield all subjectivity to a two-pronged process of objectification: firstly, because their material conditions become determined by a 'self' not recalled or revealed but applied to them by AI, and secondly, because they are dehumanised to the point of being an object of capitalist technological experimentation.
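The mechanism can be stated very simply in code. The sketch below is entirely hypothetical (invented regions, decisions, and counts; it describes no real border system): a model optimised only to reproduce historical decisions learns the discrimination embedded in those decisions and then applies it forward as if it were a rule.

```python
# A deliberately minimal, hypothetical sketch of bias reproduction: a model
# "trained" on historically biased decisions memorises the majority outcome
# per group and applies the past as the future. No real system is depicted.
from collections import Counter

# Hypothetical training records: (applicant_region, past_decision).
history = (
    [("region_A", "refuse")] * 80 + [("region_A", "admit")] * 20 +
    [("region_B", "refuse")] * 20 + [("region_B", "admit")] * 80
)

def fit(records):
    """'Train' by memorising the majority past decision for each region."""
    counts = {}
    for region, decision in records:
        counts.setdefault(region, Counter())[decision] += 1
    return {region: c.most_common(1)[0][0] for region, c in counts.items()}

model = fit(history)
print(model)  # {'region_A': 'refuse', 'region_B': 'admit'}
# The model is 80% 'accurate' against its own past, and 100% faithful to
# the prejudice encoded in it.
```

Scaled up to millions of parameters and opaque data pipelines, the same logic acquires a veneer of objectivity while remaining an aggregation of the decisions, and the politics, that produced its training data.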

There is no more horrific an example of this dehumanisation than that which Israel inflicts upon the Palestinian people, as part of an occupation that the United Nations General Assembly has deemed to be unlawful,[5] and which Amnesty International refers to as a system of apartheid.[6] Israel's campaign in Gaza is being heard in the International Court of Justice (ICJ) under allegations of genocide,[7] while prosecutors from the International Criminal Court (ICC) believe Israeli Prime Minister Benjamin Netanyahu could bear responsibility for alleged war crimes and crimes against humanity.[8]

Israel, whose chief arms exporters include the UK, US, and EU states such as Germany and France,[9] uses the 'Lavender' and 'Where's Daddy?' AI systems to produce targets for its bombing campaign in Gaza. The AI identifies (produces) targets through a debilitating and politicised surveillance of every aspect of Palestinian people's lives, and a dehumanising mechanism of social scoring that is now banned under the EU's own AI Act.[10] An investigation by +972 Magazine and Local Call revealed that the Israeli military barely scrutinised Lavender's decisions on bombing targets despite knowing that the system made errors around 10% of the time, and that the Where's Daddy? system specifically bombed targets once they had entered their family homes (Abraham 2024). In the age of the data commodity, a level of surveillance and threatened violence once considered dystopian is in fact a daily reality for the people of Palestine.

The 'us' that exists in data is incredibly valuable, and for those with control of the required technologies, it can be the determining factor in how and whether we exist. Downey (2021, 79–80) argues that while colonialism was built on a dehumanising process of occupation, labour, and wealth extraction that 'deferred, if not truncated' future realities, neocolonial data extraction and surveillance 'establishes and, increasingly, pre-determines if not controls the future'. With this shifting character of imperialism in the age of AI, not only soil but cloud is ripe for colonisation. The future in these conditions is generated from data that is mediated by an algorithm rather than anything resembling a transparent, let alone democratic, process.

Steyerl (2023) refers to a new 'battle for the commons', where 'information, memories, [and] creativity' exist in a chaotic digital public realm, owned by Big Tech and then rented back to us. It is an 'open' space that is in fact constrained by the implications of monopolistic control, a site of knowledge production claiming to benefit from common input yet in fact signalling an era of 'automated common sense' where 'tech oligarchs consolidate their cultural hegemony through automated diffusion'. Data scraped from across the digital landscape holds the promise of diversity yet is instead stripped of its critical capacity and rendered homogenous by the general conditions of its lease. The AI memory is one of automated capitalist hegemony.

In her analysis of the deep-rooted biases and oppressive historical structures that underpin border technologies, Petra Molnar (2024b, 6) says: 'Technology is often presented as being neutral, but it is always socially constructed. All technologies have an inherently political dimension'. Notwithstanding the already politicised nature of border policing, migration, and war, when we apply AI to decision-making processes, it is important to remember that the AI's capacity for objective reasoning is constrained by the conditions of its production and implementation. It is simply an incomprehensibly large-scale aggregation, perpetuation, and distortion of the information we give it and the reasons we do so. The capacity of AI to weigh up immense quantities of data creates an illusion of objectivity, or rationality, yet the conclusions it reaches about people's histories reflect a highly prejudiced logic, defined by the capitalist system.

Therefore, to cede control of memory to AI is to cede control of memory to capitalism and its beneficiaries. As I have shown, this tendency has a long and enduring lineage in the history of capitalist technological development. A central risk here, as I discuss below, is one of timing; AI is coming of age during a period where neoliberalism has sharpened capitalism's retrenchment of individual and collective agency.

Remembering is inconvenient: The neoliberal assault on society's critical faculty

My second core claim explores the relationship between our increased dependency on artificial and individualistic technological solutions, and our alienation from the conditions where democratic solution-building occurs.

As with the production of technology, our collective and personal memories similarly reflect the social conditions under which knowledge, understanding, and recollection occur. Tulving's (2002) conceptualisation of memory is useful here. Tulving argues that human memory is unique in its tendency to build on semantic memories (a storage of general facts) with episodic memories (personally experienced events). Semantic memories, the knowledge that events happened, are a mere starting point for episodic memories, our remembering and re-experiencing of how the event occurred and what it meant. The event is only given shape by its context, the set of social relations that prompt us to remember it in a certain way.

Neoliberal policies and subsequent social relations have reconstituted citizenship as the individualised and solitary pursuit of private wealth accumulation, at the expense of all other forms of social and cultural advancement (Harvey 2007, 35–36). In the UK, the post-war period was a time of relatively increased stability and reduced scarcity, which encouraged society to widen some of its democratic demands. To protect profits, neoliberalism sought to destabilise these conditions with a reinstatement of precarity. Gilbert and Williams (2022, 64–65) summarise this process in action:

Precarity, debt, and a generalised increase in average hours worked per week have created a situation in which groups and individuals simply have far less time and opportunity than they once had to engage in political organisation, struggle or reflection. None of this is accidental. (Emphasis added)

Neoliberal memory, then, is one of fragmentation and individuality. The resulting social world is one where democratic demands are replaced by consumeristic wants for tools that make life easier. Practical solutions for surviving crises are available and deliver immediate rewards; putting an end to crises is a bit more complicated. There are apps to help us deal with everything, including problems created by other apps: multiple layers of a digital bureaucracy wherein everything is ostensibly being made easier to do, from ordering drinks on an airplane (Stewart 2023) to socialising (Cantor 2024). The defining purpose of these apps is utility; their essence is a commodification of hyper-individualised living, compelling us to buy more tech and forfeit more privacy year on year, app on app, and swipe on swipe (Hadero 2024).

This is evident from the technological solutions which simultaneously emerge from and create the loneliness crisis (Cantor 2024). Companionship apps such as Anya[11] and Replika[12] provide 'the AI companion who cares … always here to listen and talk … always on your side … an AI companion who is eager to learn and would love to see the world through your eyes' (Replika). Users report that the use of AI chatbots for relationships has been beneficial for their wellness, stimulating rather than displacing their real-life relationships, and even preventing suicidal action (Maples et al. 2024, 5). At the same time, these are users who may already be vulnerable and experience disproportionately high levels of loneliness, with an increased likelihood to view the Replika bot as more human than machine (Maples et al. 2024, 5). Indeed, critics argue that chatbots inhibit humans' emotional development as they limit exposure to real-world relationships rooted in conflict, compromise, and self-improvement (Hadero 2024). Thus, the chatbots, which 'see the world through your eyes', encourage a myopic retreat from this aspect of public life.

But what are the implications of this kind of retreat for knowledge, memory, and collective consciousness? Jager (2024) applies Hal Draper's Marxist interpretation of 'idiocy', which disparages the political apathy arising from private lives that withdraw concern for public matters (Attoh 2017, 198). The theory determines that a retreat from public life into individualised pursuits amounts to an increased 'idiocy' in society, which is not indicative of reduced intelligence but of 'a fundamentally private predisposition – a retreat from public life, which implie[s] a generally unreflective attitude toward one's own opinions and views, let alone a coherent ideology' (Jager 2024).

Jager notes that this is not an AI-generated phenomenon. Rather, it is an iteration of a centuries-old implication of capitalism, and of capitalism's destruction of physical and intellectual spaces for collective debate and conflict. Rooted in the American Dream's imperative that everyone pursue the solitary act of achieving financial wealth for themselves and their families alone, capitalism forces a retreat from collective endeavour. Neoliberalism sharpens this imperative, with practical implications, for example, for physical 'third spaces', such as the now decimated working-men's clubs of the late 20th century that were 'designed neither for work nor consumption' but for socialising (for example, watching a film together and talking about it) (Jager 2024).

Thus, the loneliness crisis and associated retreat from physical and intellectual spaces which encourage collective reflection on events – with a view to collectively debating their interpretation, meaning, and future implications – is a pre-existing phenomenon, a product of capitalism's decaying effect on public life. Yet it has an acute realisation in the age of AI. Of its contemporary, AI-driven iteration, Jager (2024) continues: 'These [AI chatbot and dating app] fixes have both push and pull effects: once in existence, they rearrange the very notion of what intimacy means, while increased isolation only encourages more usage of the app'.

This highly alienating dependency occurs beyond the realm of relationship chatbots. Internet and social media addiction is redefining the meaning and importance of authenticity and history altogether. Apps like Upscaling History[13] use AI cloning to tell us what Hitler, Mussolini, and Lenin 'would have sounded like in English'. There is an AI that tells us what it thinks Jack the Ripper's face would have looked like (Landsel 2024). These applications of AI do not hold history to account; they speculate, without scientific rigour, for entertainment. The gimmickification of history has arrived.

In a similar vein, Usher (2024) analyses the social media 'content' phenomenon as it occurs within hugely popular and lucrative boxing bouts involving social media 'influencers'. We now have an algorithmically driven 'cultural economy that rewards attention and engagement over artistry and genuine skill … It doesn't matter how competent these influencers are at fighting – as long as it's "good content" nobody cares'.

What does it take to be a good boxer? Can anyone remember? While being bombarded with social media content offering fragmented and surface-level realities, too many and too overwhelming to comprehend in any depth,[14] is anyone likely to find out? With content engagement of greater commercial value than content comprehension, what hope is there for memories that don't fit the mould?

Chang and Lee (2024) observe that internet addiction in adolescents results in a decreased capacity to process semantic memories, encode memories, and plan using working memory. In this context, one where young people have limited space for individual and collective reflection, and an internet addiction that negatively impacts their cognition, memory has, at best, a puncher's chance. Meanwhile, society continues to spiral towards myopia, alienating its citizens ever further from meaning, truth, authenticity, and control.

Conclusion

The system which provides the framework and motivation for production is inseparable from that which is produced, be it knowledge, memory, interpretation, or technology. To understand AI, then, and its potential role in how individuals and society remember and forget events and conceptualise their presents and futures, is to understand the ways in which AI developments, and our capacity to engage with them, are products of the system giving shape to this and every other structural aspect of our lives. It is therefore no coincidence that neoliberal society is increasingly structured on the basis that our algorithms, although highly personalised, serve a hegemonic worldview, one that affords users little consideration of the disparity between consumer choice and collective control. The concept of myopic memory that I have sketched out here aims to encourage critical reflection on where AI comes from, what it is being used for, and why. Any assessment of the merit or technological potential of AI must take this context into consideration.

Danny Pilkington is a postgraduate researcher of sociology at the University of Glasgow. His research interests include media power, ideology, and hegemony. His PhD thesis explores hegemony within media production and content, focusing on British media coverage of Latin American politics.

Footnotes

1 OpenAI.

2 ChatGPT.

3 Generative artificial intelligence (GAI) in education – GOV.UK (www.gov.uk).

4 Artificial intelligence (AI) act: Council gives final green light to the first worldwide rules on AI (europa.eu).

5 n2426648.pdf (un.org).

6 Human rights in Israel and the Occupied Palestinian Territory Amnesty International.

7 Declaration of intervention by Chile (icj-cij.org).

8 ICC prosecutor urges judges to urgently rule on warrants for Israeli, Hamas officials | Reuters.

9 Arms exports to Israel must stop immediately: UN experts | OHCHR.

10 Israel's Killer AI (stopkiller.ai).

11 Anya (pmfm.ai).

12 Replika.

13 Upscaling History | Upscaling & Voice Cloning Historical Footage | Patreon.

14 how-people-focus-and-live-in-the-modern-information-environment.pdf (kcl.ac.uk).

References

Abraham, Y (2024) 'Lavender': The AI machine directing Israel's bombing spree in Gaza. +972 Magazine. Available at https://www.972mag.com/lavender-ai-israeli-army-gaza/ (accessed 23 September 2024).
Anderson, BRO (2006) Imagined Communities: Reflections on the Origin and Spread of Nationalism, Revised Edition. London: Verso.
Attoh, K (2017) Public transportation and the idiocy of urban life. Urban Studies 54(1), 196–213. https://doi.org/10.2307/26151332.
Barnier, A and Hoskins, A (2022) Editorial 1: Journeys, cases and conversations: An introduction to Memory, Mind & Media. Memory, Mind & Media 1, e19. https://doi.org/10.1017/MEM.2022.12.
Bourdieu, P (1987) What makes a social class? On the theoretical and practical existence of groups. Berkeley Journal of Sociology 32, 1–17.
Cantor, M (2024) Virtual living rooms, adult 'after-school clubs' and AI lovers: My search for a fix to modern loneliness. The Guardian. Available at https://www.theguardian.com/society/article/2024/jun/04/loneliness-digital-connection-cures (accessed 06 June 2024).
Césaire, A (2000) Discourse on Colonialism, Pinkham J (trans.). New York: Monthly Review Press. Available at https://www.jstor.org/stable/j.ctt9qfkrm.4 (accessed 23 September 2024).
Chang, MLY and Lee, IO (2024) Functional connectivity changes in the brain of adolescents with internet addiction: A systematic literature review of imaging studies. PLOS Mental Health 1(1), e0000022. https://doi.org/10.1371/journal.pmen.0000022.
de-Lima-Santos, MF and Ceron, W (2022) Artificial intelligence in news media: Current perceptions and future outlook. Journalism and Media 3(1), 13–26. https://doi.org/10.3390/journalmedia3010002.
Downey, A (2021) The algorithmic apparatus of neo-colonialism: Or, can we hold 'operational images' to account? The Nordic Journal of Aesthetics 30(61–62), 78–82. https://doi.org/10.7146/nja.v30i61-62.127862.
Gilbert, J and Williams, A (2022) Hegemony Now: How Big Tech and Wall Street Won the World (and How We Win It Back). London: Verso.
Gil De Zúñiga, H, Goyanes, M and Durotoye, T (2024) A scholarly definition of artificial intelligence (AI): Advancing AI as a conceptual framework in communication research. Political Communication. https://doi.org/10.1080/10584609.2023.2290497.
Gross, A and Criddle, C (2024) UK rethinks AI legislation as alarm grows over potential risks. Financial Times. Available at https://www.ft.com/content/311b29a4-bbb3-435b-8e82-ae19f2740af9 (accessed 04 June 2024).
Hadero, H (2024) AI girlfriends and boyfriends are making their mark amid artificial intelligence boom: 'I know she's a program … but the feelings, they get you'. Fortune. Available at https://fortune.com/2024/02/14/ai-girlfriends-boyfriends-artificial-intelligence-boom/ (accessed 22 February 2024).
Harvey, D (2007) Neoliberalism as creative destruction. Annals of the American Academy of Political and Social Science 610(1), 22–44. https://doi.org/10.1177/0002716206296780.
Hoskins, A, Čimová, K and Pilkington, D (forthcoming) How AI makes us forget.
Hoskins, A and Halstead, H (2021) The new grey of memory: Andrew Hoskins in conversation with Huw Halstead. Memory Studies 14(3), 675–685. https://doi.org/10.1177/17506980211010936.
Jager, A (2024) Automated intimacy. Jacobin. Available at https://jacobin.com/2024/01/automated-intimacy (accessed 22 February 2024).
Landsel, D (2024) Here's what Jack the Ripper looked like according to AI. New York Post. Available at https://nypost.com/2024/02/16/lifestyle/heres-what-jack-the-ripper-looked-like-according-to-ai/ (accessed 22 February 2024).
Leike, J and Sutskever, I (2023) Introducing superalignment. OpenAI Blog. Available at https://openai.com/blog/introducing-superalignment (accessed 22 February 2024).
Lenin, V (2021) Imperialism, the Highest Stage of Capitalism. Great Britain: The Leftist Public Domain Project, Amazon Supermarket Editions.
Lovely, G (2024) Can humanity survive AI? Jacobin. Available at https://jacobin.com/2024/01/can-humanity-survive-ai (accessed 22 February 2024).
Mandel, E (1990) Introduction. In Marx, K, Capital Volume 1. London: Penguin, 11–86.
Maples, B, Cerit, M, Vishwanath, A and Pea, R (2024) Loneliness and suicide mitigation for students using GPT3-enabled chatbots. npj Mental Health Research 3(4), 1–6. https://doi.org/10.1038/s44184-023-00047-6.
Mearian, L (2024) What are LLMs, and how are they used in generative AI? Computerworld. Available at https://www.computerworld.com/article/3697649/what-are-large-language-models-and-how-are-they-used-in-generative-ai.html (accessed 22 February 2024).
Mengesha, S, Dunn, K and Luangrath, N (2024) The rise of AI and technology in immigration enforcement. The Regulatory Review. Available at https://www.theregreview.org/2024/03/23/the-rise-of-ai-and-technology-in-immigration-enforcement/ (accessed 04 June 2024).
Molnar, P (2023) EU's AI act falls short on protecting rights at borders. Just Security. Available at https://www.justsecurity.org/90763/eus-ai-act-falls-short-on-protecting-rights-at-borders/ (accessed 26 March 2024).
Molnar, P (2024a) The deadly digital frontiers at the border. Time Magazine. Available at https://time.com/6979557/unregulated-border-technology-migration-essay/ (accessed 04 June 2024).
Molnar, P (2024b) The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence. New York: The New Press.
Montgomery, B and Paul, K (2024) US launches inquiry into AI deals by Microsoft, OpenAI, Google and Amazon. The Guardian. Available at https://www.theguardian.com/technology/2024/jan/25/ftc-ai-inquiry-microsoft-alphabet-amazon (accessed 22 February 2024).
Mosolova, D (2023) UK will refrain from regulating AI 'in the short term'. Financial Times. Available at https://www.ft.com/content/ecef269b-be57-4a52-8743-70da5b8d9a65 (accessed 22 February 2024).
O'Shea, L (2024) Can AI be regulated? Europe is about to find out. Jacobin. Available at https://jacobin.com/2024/01/can-ai-be-regulated-europe-is-about-to-find-out (accessed 22 February 2024).
Parenti, M (1997) Blackshirts & Reds: Rational Fascism & the Overthrow of Communism. San Francisco: City Lights Books.
Stewart, E (2023) Do we really need an app for everything? Vox. Available at https://www.vox.com/money/23743915/iphone-android-apps-airline-dentist-pandemic-data-privacy-restaurant (accessed 06 June 2024).
Steyerl, H (2023) Common sensing? Machine learning, 'enchantment' and hegemony. New Left Review II(144), 69–80. Available at https://newleftreview.org/issues/ii144/articles/hito-steyerl-common-sensing (accessed 26 March 2024).
Tulving, E (2002) Episodic memory: From mind to brain. Annual Review of Psychology 53, 1–25. https://doi.org/10.1146/annurev.psych.53.100901.135114.
Tyler, H (2022) The increasing use of artificial intelligence in border zones. Migration Policy Institute. Available at https://www.migrationpolicy.org/article/artificial-intelligence-border-zones-privacy (accessed 26 March 2024).
Usher, T (2024) Angry about Jake Paul v Mike Tyson? That's the whole point of 'influencer boxing'. The Guardian. Available at https://www.theguardian.com/commentisfree/2024/apr/08/jake-paul-v-mike-tyson-influencer-boxing (accessed 06 June 2024).