12.1 Introduction
The infamous Australian Robodebt scheme and the application of the COMPAS tool in the United States are just two examples of the abuse of power in the Automated State. Yet our efforts to tackle such abuses have largely failed: corporations and states have used AI to influence many crucial aspects of our public and private lives, from our elections to our personalities and emotions, and from environmental degradation through the extraction of global resources to labour exploitation. And we do not know how to tame them. In this chapter I suggest that our efforts have failed because they are grounded in what I call procedural fetishism – an overemphasis on procedural safeguards and the assumption that transparency and due process can temper power and protect the interests of people in the Automated State.
Procedural safeguards, rules, and frameworks play a valuable role in regulating AI decision-making and directing it towards accuracy, consistency, reliability, and fairness. However, procedures alone can be dangerous: they legitimize excessive power and obfuscate the most substantive problems we face today. In this chapter, I show how procedural fetishism obfuscates and redirects public attention from more substantive and fundamental questions about the concentration and limits of power in the Automated State to procedural micro-issues and safeguards. Such redirection merely reinforces the status quo. Procedural fetishism detracts from questions of substantive accountability and obligations by diverting attention to ‘fixing’ procedural micro-issues that have little chance of changing the political or legal status quo. Regulatory efforts and scholarly debate, plagued by procedural fetishism, have been blind to colonial AI extraction practices, labour exploitation, and the dominance of US tech companies, as if these did not exist. Procedural fetishism – whether corporate or state – is dangerous. Not only does it defer social and political change; it also legitimizes corporate and state influence and power under an illusion of control and neutrality.
To rectify the imbalance of power between people, corporations, and states, we must shift the focus from soft law initiatives to substantive accountability and tangible legal obligations for AI companies. Imposing data privacy obligations directly upon AI companies through an international treaty is one (but not the only) option. The viability of such an instrument has been doubted: human rights law and international law, so the argument goes, are state-centric. Yet, as data protection law illustrates, we already apply (even if poorly) certain human rights obligations to private actors. Similarly, the origins of international law date back to powerful corporations that were the ‘Googles’ and ‘Facebooks’ of their time. In parallel to such a global instrument on data privacy, we must also redistribute wealth and power by breaking up and taxing AI companies, increasing public scrutiny by adopting prohibitive laws, and democratizing AI technologies by making them public utilities. Crucially, we must recognize colonial AI practices of extraction and exploitation and pay attention to the voices of Indigenous peoples and communities of the so-called Global South. With all these mutually reinforcing efforts, a new AI regulation will resist procedural fetishism and establish a new social contract for the age of AI.
12.2 Existing Efforts to Tame AI Power
Regulatory AI efforts cover a wide range of policies, laws, and voluntary initiatives at the national, regional, and international levels, including domestic constitutions, laws, and judicial decisions; regional and international instruments and jurisprudence; self-regulatory initiatives; and transnational non-binding guidelines developed by private actors and NGOs.
Many recent AI regulatory efforts aim to tackle private tech power through national laws. For example, in the United States, five bipartisan bills collectively referred to as ‘A Stronger Online Economy: Opportunity, Innovation and Choice’ have been proposed, seeking to restrain tech companies’ power and monopolies.Footnote 1 In China, AI companies once seen as untouchable (particularly Alibaba and Tencent) faced a tough year in 2021.Footnote 2 For example, the State Administration for Market Regulation (SAMR) took aggressive steps to rein in monopolistic behaviour, levying a record US$2.8 billion fine on Alibaba.Footnote 3 AI companies are also facing regulatory pressure targeting anti-competitive behaviour in Australia.Footnote 4
At a regional level, perhaps the strongest example of AI regulation is in the European Union, where several prominent legislative proposals have been tabled in recent years. The Artificial Intelligence ActFootnote 5 and the Data ActFootnote 6 aim to limit the use of AI and ADM systems. These proposals build on the EU’s strong track record in the area: the EU General Data Protection Regulation (GDPR),Footnote 7 for example, has regulated the processing of personal data. The EU has been leading AI regulatory efforts on a global scale with its binding laws and regulations.
On an international level, many initiatives have attempted to draw the boundaries of appropriate AI use, often resorting to the language of human rights. For example, the Organisation for Economic Co-operation and Development (OECD) adopted its AI Principles in 2019,Footnote 8 which draw inspiration from international human rights instruments. However, despite the popularity of human rights discourse in AI regulation, international human rights instruments, such as the International Covenant on Civil and Political RightsFootnote 9 or the International Covenant on Economic, Social and Cultural Rights,Footnote 10 are not directly binding on private companies.Footnote 11 Instead, various networks and organizations try to promote human rights values among AI companies.
However, these efforts have to date had limited success in taming the power of AI and in dealing with global AI inequalities and harms. This weakness stems from the proceduralist focus of AI regulatory discourse: proponents have assumed that procedural safeguards, transparency, and due process can temper power and protect the interests of people against the power wielded by AI companies (and the state) in the Automated State. Such assumptions stem from the liberal framework focused on individual rights, transparency, due process, and procedural constraints, which, to date, AI scholarship and regulation have embraced without questioning its capacity to tackle power in the Automated State.
These assumptions are closely related to the normative foundations of AI and automated decision-making systems (ADMS) governance, which stem, in large part, from a popular analogy between tech companies and states: AI companies exert quasi-sovereign influence over commerce, speech and expression, elections, and other areas of life.Footnote 12 It is also this analogy, with the power of the state as its starting point, that leads to the proceduralist focus and emphasis in AI governance discourse: just as due process and safeguards constrain the state, they must now also apply to powerful private actors, like AI companies. Danielle Keats Citron’s and Frank Pasquale’s early groundbreaking calls for technological due process have been influential: they showed how constitutional principles could be applied to technology and automated decision-making – by administrative agencies and private actors alike.Footnote 13 The construction of various procedural safeguards and solutions, such as testing, audits, algorithmic impact assessments, and documentation requirements, has dominated the AI decision-making and ADMS literature.Footnote 14
Yet, by pouring all our energy into these procedural fixes, we miss the larger picture and remain blind to our own coloniality: we rarely (if at all) discuss US dominance of the AI economy, and we seldom mention the environmental exploitation and degradation caused by AI and ADM technologies. We rarely ask how AI technologies reinforce existing global power disparities between the so-called Global South and the Imperialist West/North, or how they contribute to climate disaster, the exploitation of people, and the extraction of resources in the so-called Global South. These substantive issues matter, and arguably matter more than the design of a particular AI auditing tool. Yet we are too busy designing procedural fixes.
To be successful, AI regulation must resist what I call procedural fetishism – a strategy, employed by AI companies and state actors, of redirecting the public from more substantive and fundamental questions about the concentration and limits of power in the age of AI to procedural safeguards and micro-issues. This diversion entrenches the status quo and Western dominance, and accelerates environmental degradation and the exploitation of postcolonial peoples and resources.
12.3 Procedural Fetishism
Proceduralism, in its broadest sense, refers to ‘a belief in the value of explicit, formalized procedures that need to be followed closely’,Footnote 15 or ‘the tendency to believe that procedure is centrally important’.Footnote 16 The term is often used to describe the legitimization of rules, decisions, or institutions through the process used to create them, rather than through their substantive moral value.Footnote 17 This trend towards proceduralism – or what I call procedural fetishism – also dominates our thinking about AI: we believe that having certain ‘safeguards’ for AI systems is inherently valuable, and that those safeguards tame power and provide sufficient grounds to trust the Automated State. However, procedural fetishism undermines our efforts for justice for several reasons.
First, procedural fetishism offers an appearance of political and normative neutrality, which is convenient to AI companies as well as to policymakers, judges, and regulators. Proceduralism allows various actors to ‘remain agnostic towards substantive political and moral values’ when ‘faced with the pluralism of contemporary societies’.Footnote 18 At the ‘heart’ of all proceduralist accounts of justice, therefore, is the idea that, as individual members of a pluralist system, we may agree on what amounts to a just procedure (if not a just outcome), and ‘if we manage to do so, just procedures will yield just outcomes’.Footnote 19 However, procedural fetishism enables various actors not only to remain agnostic, but to avoid confronting hard political questions. For example, the courts engage in procedural fetishism to appear neutral and to avoid tackling the politically difficult questions of the necessity, proportionality, and legitimacy of corporate and state surveillance practices, offering procedural band-aids instead.Footnote 20 The focus on procedural safeguards provides a convenient way to give the appearance of regulatory effort without actually prohibiting any practices or conduct.
A good example of the neutralizing appearance of procedural fetishism is AI governance’s blind eye to important policy issues affected by AI, such as climate change, environmental degradation, and the continued exploitation of resources from so-called Third World countries. The EU- and US-dominated AI debate has focused on inequalities reinforced through AI in organizational settings in business and public administration, but it has largely been blind to the inequalities of AI on a global scale,Footnote 21 including the global outsourcing of labour,Footnote 22 and the flow of capital through colonial and extractive processes.Footnote 23 While it is the industrial nations of North America, Europe, and East Asia that compete in the ‘race for AI’,Footnote 24 AI and ADM systems depend on global resources, most often extracted from the so-called Global South.Footnote 25 Critical AI scholars have analyzed how the production of capitalist surplus for a handful of big tech companies draws on large-scale exploitation of soil, minerals, and other resources.Footnote 26 Other critical scholars have described the processes of extraction and exchange of personal data itself as a form of dispossession and data colonialism.Footnote 27 Moreover, AI and ADM systems have also been promoted as indispensable tools in international development,Footnote 28 but many have pointed out how those efforts often reinforce further colonization and extraction.Footnote 29 Procedural fetishism also downplays the human labour involved in AI technologies, which draws on underpaid, racialized, and not at all ‘artificial’ human labour, primarily from the so-called Global South. The AI economy is one in which highly precarious working conditions for gig economy ‘click’ workers are necessary for the business models of AI companies.
12.3.1 Legitimizing Effect of Procedural Fetishism
Moreover, procedural fetishism is used strategically not only to distract from power disparities but also to legitimize unjust and harmful AI policies and actions by exploiting people’s perceptions of legitimacy and justice. As early as the 1980s, psychological research undermined the traditional view that substantive outcomes drive people’s perceptions of justice, showing instead that those perceptions depend more on the procedure used to reach the outcome.Footnote 30 Many of the ongoing proceduralist reforms, such as Facebook’s Oversight Board, are conceived primarily for this very purpose – to make it look as if Facebook is doing the ‘right thing’ and delivering justice, irrespective of whether substantive policy issues change or not. Importantly, such corporate initiatives divert attention from the problems caused by the global dominance of AI companies.Footnote 31
The language of ‘lawfulness’ and constitutional values, prevalent in AI governance debates, works as a particularly strong legitimizing catalyst in both public and policy debates. As critical scholars have pointed out, using terminology typically employed in the context of elected democratic governments misleads, for it infuses AI companies with democratic legitimacy and conflates corporate interests with public objectives.Footnote 32
In the following sections, I suggest that this language is prevalent not by accident, but through sustained corporate efforts to legitimize corporate power and business models, to avoid regulation, and to enhance reputations for commercial gain. AI companies often come up with private solutions that develop apparent safeguards against their own abuse of power and increase their transparency to the public. Yet, as I have argued earlier, many such corporate initiatives are designed to obfuscate and misdirect policymakers, researchers, and the public in a bid to strengthen brands and avoid regulation and binding laws.Footnote 33 AI companies have also successfully corporatized and attenuated the laws and regulations that bind them. Through many procedures, checklists, and frameworks, corporate compliance with existing binding laws has often been a strategic performance, devoid of substantial change in business practices. Such compliance has worked to legitimize business policy and corporate power in the eyes of the public, regulators, and the courts. In establishing global dominance, AI companies have also been aided by governments.
12.3.2 Procedural Washing through Self-Regulation
First, corporate self-regulatory AI initiatives are often cynical marketing and social branding strategies designed to increase public confidence in companies’ operations and create a better public image.Footnote 34 AI companies often self-regulate selectively, disclosing and addressing only what is commercially desirable for them. For example, Google, when creating an Advanced Technology External Advisory Council (Council) in 2019 to implement Google’s AI Principles,Footnote 35 refused to reveal the internal processes that led to the selection of a controversial member, anti-LGBTI advocate and climate change denial sponsor Kay Coles James.Footnote 36 While employee activism forced Google to disband the Council, the episode, ironically, exposed Google’s unwillingness to publicly share the selection criteria for its AI governance boards.
Second, AI companies self-regulate only if it pays off for them in the long run; profit is the main concern.Footnote 37 For example, in 2012 IBM provided police forces in the Philippines with video surveillance technology that was later used to perpetuate President Duterte’s war on drugs through extrajudicial killings.Footnote 38 At the time, IBM defended the deal with the Philippines, saying it ‘was intended for legitimate public safety activities’.Footnote 39 The company’s practice of providing authoritarian regimes with technological infrastructure is not new: it dates back to the 1930s, when IBM supplied the Nazi Party with unique punch-card technology that was used to run the regime’s censuses and surveys to identify and target Jewish people.Footnote 40
Third, corporate initiatives also allow AI companies to pre-empt regulation of their activities. A good example of pro-active self-regulation is Facebook’s Oversight Board, which reviews individual decisions rather than overarching policies. Attention is thus diverted away from critiquing the legitimacy or appropriateness of Facebook’s AI business practices themselves and focused instead on Facebook’s ‘transparency’ about them. The appropriateness of the substantive AI policies themselves is obfuscated, or even legitimated, through micro procedural initiatives with little power to change the status quo. In setting up the board, Facebook has attempted not only to stave off regulation, but also to position itself as an industry regulator, by inviting competitors to use the Oversight Board as well.Footnote 41 AI companies can then depict themselves as their own regulators.
12.3.3 Procedural Washing through Law and the Help of the State
Moreover, AI companies (and public administrations) have also exploited the ambiguity of the laws regulating their behaviour through performative compliance. Policymakers have often compounded this problem by creating legal provisions that advance the proceduralist agenda of corporations, including via international organizations and international law, while regulators and courts have enabled corporatized compliance by focusing, in applying these provisions, on the quality of procedural safeguards.
For instance, Ari Ezra Waldman has shown how the regulatory regime of data privacy, even under the GDPR – the piece of legislation that has gained a reputation as the strongest and most ambitious law of the age of AI – has been ‘managerialized’: interpreted by compliance professionals, human resource experts, marketing officers, outside auditors, in-house and firm lawyers, as well as systems engineers, technologists, and salespeople, to prioritize values of efficiency and innovation in the implementation of data privacy law.Footnote 42 As Waldman has argued, many symbolic structures of compliance are created; yet, apart from an exhaustive suite of checklists, toolkits, privacy roles, and professional training, there is little substantive action to enhance consumer protection or minimize online data breaches.Footnote 43 These structures comply with the law in name but not in spirit, and are in turn treated by lawmakers and judges as best practice.Footnote 44 The law thus fails to achieve its intended goals as the compliance metric developed by corporations becomes dominant,Footnote 45 and the ‘mere presence of compliance structures’ is assumed to be ‘evidence of substantive adherence with the law’.Footnote 46 Twenty-six recent studies have analyzed the impact of the GDPR and US data privacy laws, and none found any meaningful influence of these laws on people’s data privacy protection.Footnote 47
Many other laws have themselves been designed in the spirit of procedural fetishism, enabling corporations to avoid liability, without changing their substantive policies, simply by establishing prescribed procedures. Known as ‘safe harbours’, such laws allow companies to escape liability so long as they follow a prescribed procedure. For example, under the traditional notice-and-consent regime in the United States, companies avoid liability as long as they post their data use practices in a privacy policy.Footnote 48
Regulators and the courts, by emphasizing procedural safeguards, also engage in performative regulation grounded in procedural fetishism, which limits pressure for stricter laws by convincing citizens and institutions that their interests are sufficiently protected, without inquiring into the substantive legality of corporate practices. A good example is the Federal Trade Commission’s (FTC) audit and ‘assessment’ requirements, which require corporations to demonstrate compliance through checklists.Footnote 49 Similar procedural fetishism is also prevalent in jurisprudence, which assesses specific state practices not by reference to their effectiveness in advancing their proclaimed goals, but purely by reference to the stringency of the procedures governing them.Footnote 50
12.3.4 Procedural Washing through State Rhetoric and International Law
Procedural washing by AI companies has also been aided by executive governments – both through large amounts of public funding and subsidies to these companies, and through the development of laws, including international laws, that suit corporate and national agendas. Such support is not one-sided, of course: the state expands its economic and geopolitical power through technology companies. All major powers, including the United States, the European Union, and China, have been active in promoting their AI companies. For example, the mutually beneficial and interdependent relationship between the US government and information technology giants has been described as the information-industrial complex, the data-industrial complex, and so on.Footnote 51 These insights build on the work of Herbert Schiller, who described the US government’s continuous subsidization of private communications companies back in the 1960s and 1970s.Footnote 52 Grounding their work in these classical insights, Powers and Jablonski describe how the dynamics of the information-industrial complex have catalyzed the rapid growth of information and communication technologies within the global economy while firmly embedding US strategic interests and companies at the heart of the current neoliberal regime.Footnote 53 Such a central strategic position necessitates continuous action and support from the US government.
To maintain the dominance of US AI companies internationally, the US government aggressively promotes the global free trade regime, intellectual property enforcement, and other policies that suit US interests. For example, the dominance of US cultural and AI products and services worldwide is secured via the free flow of information doctrine at the World Trade Organization, which the US State Department pushed through the GATT, GATS, and TRIPS.Footnote 54 The free flow of information doctrine allows US corporations to collect and monetize the personal data of individuals from around the world. In this way, data protection and privacy are not part of the ‘universal’ values of the Internet, whereas strong intellectual property protection is not only viable and doable, but also strictly enforced globally.
Many other governments have also been complicit in this process. For example, the EU AI Act, despite its declared mission of ‘human-centred AI’, is silent about the environmental degradation and social harms that occur in other parts of the world because of the large-scale mineral and resource extraction and energy consumption necessary to produce and power AI and digital technologies.Footnote 55 The EU AI Act is also silent on the conditions under which AI is produced and on the coloniality of the AI political economy: it does not address precarious working conditions and global labour flows. The EU AI Act is thus also plagued by procedural fetishism: it does not seek to improve the global conditions for environmentally sustainable AI production. In short, both the United States and the EU have prioritized inaction: self-regulation over regulation, non-enforcement over enforcement, and judicial acceptance over substantive resistance. While stressing the differences between US and EU regulatory approaches has been popular,Footnote 56 the end result has been very similar in both: tech companies collect and exploit personal data not only for profit, but also for political and social power.
In sum, procedural fetishism in AI discourse is dangerous because it creates an illusion of normative neutrality. Our efforts to constrain AI companies are replaced with a corporate vision of how power and wealth should be divided between corporations and the people, masked under a veil of neutrality.
12.4 The New Social Contract for the Age of AI
The new social contract for the age of AI must try something different: it must shift the focus from soft law initiatives and performative corporate compliance to substantive accountability and tangible legal obligations for AI companies. Imposing directly binding data privacy obligations on AI companies through an international treaty is one (but not the only!) option. Other parallel actions include breaking up and taxing tech companies, increasing competition and public scrutiny, and democratizing AI companies by involving people in their governance.
12.4.1 International Legally Binding Instrument Regulating Personal Data
One of the best ways to tame AI companies is via the ‘currency’ with which people often ‘pay’ for their services – personal data. The new social contract should not be concerned only with the procedures that AI companies must follow while continuing to exploit personal data. Instead, it should impose substantive limits on corporate AI action: defining the circumstances in which data cannot be collected and used, and how and when it can be exchanged, and banning manipulative technologies and biometrics to ensure mental welfare and social justice.
Domestic legislators should, of course, develop such laws (and I discuss this below too). However, given that tech companies exploit our data across the globe, we need a global instrument to lead our regulatory AI efforts. Imposing directly binding obligations on AI companies through an international treaty should be one (but not the only!) option. While the exact parameters of such a treaty are beyond the scope of this chapter, I would like to rebut one misleading argument, often used by AI companies: that private companies cannot have direct obligations under international law.
The relationship between private actors and international law has been a subject of intense political and scholarly debate for over four decades,Footnote 57 since the first attempts to develop a binding international code of conduct for multinational corporations in the 1970s.Footnote 58 The most recent efforts, part of a process that started with the so-called Ecuador Resolution in 2014, have led to the ‘Third Revised Draft’ of the UN Treaty on Business and Human Rights, released in 2021.Footnote 59 The attempts to impose binding obligations on corporations have not yet been successful because of enormous political resistance from private actors, for whom such developments would be costly. Corporate resistance spans many fronts; here I can only focus on debunking the corporate myth that such constitutional reform is not viable, or is even legally impossible, because of the state-centric nature of human rights law. Yet, as data protection law, discussed above, illustrates, we already apply (even if poorly) certain human rights obligations to private actors. We can and should demand more from corporations in other policy areas.
Importantly, we must understand the role of private actors under international law. Contrary to the popular myth that international law was created by and for nation-states, ‘[s]ince its very inception, modern international law has regulated the dealings between states, empires and companies’.Footnote 60 The origins of international law itself date back to powerful corporations that were the Googles and Facebooks of their time. Hugo Grotius, often regarded as the father of modern international law, was himself counsel to the Dutch East India Company – the largest and most powerful corporation in history. In this role, Grotius’ promotion of the principle of the freedom of the high seas and his views on the status of corporations were shaped by the interests of the Dutch East India Company in ensuring the security and efficacy of the company’s trading routes.Footnote 61 As Peter Borschberg explains, Grotius crafted his arguments to legitimize the rights of the Dutch to engage in the East Indies trade and to justify the Dutch Company’s violence against the Portuguese, who claimed exclusive rights to the Eastern Hemisphere.Footnote 62 In particular, Grotius aimed to justify the seizure by the Dutch of the Portuguese carrack Santa Catarina in 1603:
[E]ven though people grouped as a whole and people as private individuals do not differ in the natural order, a distinction has arisen from a man-made fiction and from the consent of citizens. The law of nations, however, does not recognize such distinctions; it places public bodies and private companies in the same category.Footnote 63
Grotius argued that the moral personality of individuals and of collections of individuals does not differ, including in what was, for Grotius, their ‘natural right to wage war’. He concluded that ‘private trading companies were as entitled to make war as were the traditional sovereigns of Europe’.Footnote 64
Therefore, contrary to the popular myth convenient to AI companies, the ‘law of nations’ has always been able to accommodate private actors, whose greed and search for power gave rise to many concepts of modern international law. We must therefore recognize this relationship and impose hard legal obligations related to AI on companies under international law, precisely to prevent tech companies’ greed and predatory actions, which have global consequences.
12.4.2 Increased Political Scrutiny and Novel Ambitious Laws
We must also abolish the legislative regimes that have in the past established safe harbours for AI companies, such as the EU-US Transatlantic Privacy Framework,Footnote 65 previously known as the Safe Harbour and the Privacy Shield. Similarly, regimes based on procedural avoidance of liability, such as the one under Section 230 of the US Communications Decency Act 1996, should be reconsidered. This provision provides that websites are not to be treated as the publishers of third-party (i.e., user-submitted) content, and it is particularly useful for platforms like Facebook.
Some of the more recent AI regulatory efforts might be displaying the first seeds of substance-focused regulation. For example, moratoriums on the use of facial recognition technologies have been issued across many municipalities and cities in the United States, including the state of Oregon and New York City.Footnote 66 In the EU too, some of the latest proposals display an ambition to ban certain uses and abuses of technology. For example, the Artificial Intelligence Act provides a list of ‘unacceptable’ AI systems and prohibits their use. The Artificial Intelligence Act has been subject to criticism about its effectiveness,Footnote 67 yet its prohibitive approach can be contrasted with earlier EU regulations, such as the GDPR, which did not proclaim that certain areas should not be automated at all, or that some data should never be processed or fall into the hands of tech companies. On an international level, the OECD has recently announced a landmark international tax deal, under which 136 countries and jurisdictions representing more than 90 per cent of global GDP agreed to a minimum corporate tax rate of 15 per cent on the biggest international corporations, effective from 2023.Footnote 68 While this does not tackle tech companies’ business practices, it aims at a fairer redistribution of wealth, which must also be a focus of the new social contract if we wish to restrain the power of AI.
12.4.3 Breaking Up AI Companies and the Public Utilities Approach
We must also break up AI companies, many of which have grown so large that they are effectively gatekeepers in their markets. Many scholars have recently proposed ways to employ antitrust and competition law to deal with and break up big tech companies,Footnote 69 and such efforts are also visible at the political level. For example, in December 2020, the EU Commission published proposals for two new pieces of legislation: the Digital Markets Act (DMA) and the Digital Services Act (DSA).Footnote 70 The proposals aim to ensure that platform giants, such as Google, Amazon, Apple, and Facebook, operate fairly, and to increase competition in digital markets.
We already have legal instruments for breaking up the concentration of power in the AI sector: for example, the US Sherman Act 1890 makes monopolization unlawful.Footnote 71 And we must use the tools of competition and antitrust law (but not only them!) to redistribute wealth and power. While sceptics argue that a Sherman Act case against Amazon, Facebook, or Google would not improve economic welfare in the long run,Footnote 72 we must start somewhere. For instance, as Kieron O’Hara has suggested, we could prevent anticompetitive mergers and require tech giants to divest companies they acquired to stifle competition, such as Facebook’s acquisitions of WhatsApp and Instagram.Footnote 73 We could also ring-fence the giants into particular sectors; Amazon’s purchase of Whole Foods Market (a supermarket chain), for example, would likely have been prevented by that strategy. We could also force tech giants to split their businesses into separate corporations.Footnote 74 Amazon, for instance, would be split into its e-commerce platform, physical stores, web services, and advertising business.
However, antitrust reforms should not obscure more radical solutions suggested by critical scholars. For example, digital services could be conceived as public utilities: either as closely regulated private companies or as government-run organizations, administered at municipal, state, national, or regional levels.Footnote 75 While the exact proposals of the ‘public utility’ approach vary, they aim at placing big AI companies (and other big enterprises) under public control.Footnote 76 This provides a strong alternative to market-driven solutions to restore competition in the technology sector, and has greater potential to address the structural problems of exploitation, manipulation, and surveillance.Footnote 77
12.4.4 Decolonizing Technology Infrastructure
We should also pay attention to the asymmetries in economic and political power on a global scale: US dominance in digital technologies and AI, US influence in shaping international free trade and intellectual property regimes, the rising influence of China, and the EU’s ambition to set global regulatory standards in many policy areas, with businesses and public bodies in the so-called Global South on the receiving end of Brussels’ demands about what ‘ethical’ AI is and how ‘data protection’ must be understood and implemented.Footnote 78
We should also incorporate Indigenous epistemologies, which provide strong conceptual alternatives to the dominant AI discourse. Decolonial ways to theorize, analyze, and critique AI and ADM systems must be part of our new social contract for the age of AI,Footnote 79 because people in the so-called Global South relate very differently to major AI platforms than those who live and work where these companies are headquartered.Footnote 80 A good example in this regard is the ‘Technologies for Liberation’ project, which studies how queer, trans, two-spirit, black, Indigenous, and people of colour communities are disproportionately impacted by surveillance technologies and criminalization.Footnote 81 Legal scholars must reach beyond our comfortable Western, often Anglo-Saxon, positions and bring forward the perspectives of those who have been excluded and marginalized in the development of AI and ADMS tools.
Decolonization, however, must also happen in the law itself. For example, the EU’s focus on regulating AI and ADMS as a consumer ‘product-in-use’ requiring individual protection is hypocritical and undermines its claims to regulate ‘ethical’ AI, for it completely ignores the exploitative practices and global implications of AI production and use. These power disparities and forms of exploitation must be recognized and officially acknowledged in new laws.
Finally, we need novel spaces for thinking about, creating, and developing new AI regulation: spaces that are not dominated by procedural fetishism. A good example of possible resistance, promoted by decolonial data scholars, is the Non-Aligned Technologies Movement (NATM) – a worldwide alliance of civil society organizations that aims to create ‘techno-social spaces beyond the profit-motivated model of Silicon Valley and the control-motivated model of the Chinese Communist Party. NATM does not presume to offer a single solution to the problem of data colonialism; instead it seeks to promote a collection of models and platforms that allow communities to articulate their own approaches to decolonization’.Footnote 82
12.5 Conclusion
The new social contract for the age of AI must incorporate all of these different strategies – we need a new framework, not just quick procedural fixes. Individually, these strategies might not achieve substantive policy change. Together, however, acting in parallel, the proposed changes will enable us to start resisting the corporate and state agenda of procedural fetishism. In a digital environment dominated by AI companies, procedural fetishism is an intentional strategy to obfuscate the implications of concentrated corporate power. AI behemoths legitimize their practices through procedural washing and performative compliance in order to divert the focus onto the procedures they follow, both for commercial gain and to avoid their operations being tempered by regulation. They are also aided by states, which enable corporate dominance through laws and legal frameworks.
Countering corporate procedural fetishism requires, first of all, returning the focus to the substantive problems of the digital environment. In other words, it requires paying attention to the substance of tech companies’ policies and practices and to their power, not only to their procedures. This requires a new social contract for the age of AI. Rather than buying into procedural washing, as companies intend us to do, we need new binding, legally enforceable mechanisms to hold AI companies to account. We have many options, and we need to act on all fronts. Imposing data privacy obligations directly on AI companies through an international treaty is one way. In parallel, we must also redistribute wealth and power by breaking up and taxing tech companies, increasing public scrutiny by adopting prohibitive laws, and democratizing and decolonizing big tech by giving people the power to determine how these companies should be governed. We must recognize that AI companies exercise global dominance with significant international and environmental implications. This aspect of technology is tied to the global economic structure and therefore cannot be addressed in isolation: it requires systemic changes to our economy. A crucial step in that direction is developing and maintaining AI platforms as public utilities, which operate for the public good rather than for profit. The new social contract for the age of AI should de-commodify data relations, rethink behavioural advertising as the foundation of the Internet, and reshape social media and internet search as public utilities. With all these mutually reinforcing efforts, we must debunk the corporate and state agenda of procedural fetishism and demand basic, tangible constraints for the new social contract in the Automated State.