I. INTRODUCTION
A world in which machines can learn to identify cancer more accurately than doctors, predict which criminals will reoffend and even drive cars once belonged to the realm of science fiction. Today, the benefits of artificial intelligence (AI) are being reaped worldwide, supporting human intelligence in myriad ways. However, despite its undeniable merits, AI has the potential to wreak havoc with the enjoyment of human rights—and cases of discrimination, violations of privacy, loss of jobs and negative impacts on access to public services increasingly feature in international headlines.
The relationship between AI and human rights is irrefutable, and support for an approach to AI that is based on the international human rights law framework has grown within academia and practice.Footnote 1 However, despite ongoing developments, the international human rights law framework concerning AI is far from fully evolved, particularly regarding the roles and responsibilities of the different actors active within it, especially private businesses developing and deploying AI. Questions remain concerning the boundaries of responsibility of such actors and how they can be held to account. Furthermore, many AI governance initiatives focus on AI ethics rather than human rights. Although such initiatives often contain principles or guidelines for both States and businesses which correlate with human rights standards, the way in which such instruments contribute to the protection of human rights is not always obvious. Clarity is desperately needed not only for individuals whose rights are affected, but also for other relevant stakeholders, such as States and businesses developing or deploying AI.
This article addresses the question of whether, and to what extent, AI ‘law-and-governance’ initiatives at the international, regional and national levels engage with human rights and contribute to legal certainty regarding the application of international human rights law in the context of the development and deployment of AI. Taking a broader law-and-governance approach and assessing the initiatives from a human rights law perspective, the article builds on previous scholarship concerning the legal certainty of international law regarding AIFootnote 2 as well as on studies mapping AI ethics guidelines and recommendations.Footnote 3
First, the article introduces the issue of legal certainty concerning AI and international human rights law (Section II). The current state of the law relating to AI and the position of businesses under international human rights law are summarised. The causes of legal uncertainty in international human rights law and the importance of filling gaps in understanding are also highlighted. Sections III–V provide an assessment of AI law-and-governance initiatives at the international level (Section III); regional level (Section IV); and national level (Section V), respectively. These various initiatives are then assessed in the light of their contribution towards the clarification of human rights standards applicable to States and businesses. Section VI comprises a comparative analysis of the initiatives and conclusions are provided in Section VII.
Throughout the article, a law-and-governance approach is adopted, in which solutions to societal problems are sought beyond the confines of the law.Footnote 4 This interdisciplinary approach views laws as tools of governance: as part of a governance structure comprising activities undertaken by many different actors, both public and private.Footnote 5 In this article the term ‘governance’ is used to refer to the ‘drafting, adopting, implementing and enforcing rules, or standards, [as well as] the mechanisms, processes and institutions that exist to achieve these tasks’.Footnote 6 Governance is understood to reach beyond governmental activities to comprise the aforementioned tasks ‘independent of the numbers and kinds of actors carrying [them] out’.Footnote 7 This reflects the current reality that governments often rely on non-governmental (that is, non-State) actors to undertake various governance tasks and ‘secure its intentions, deliver its policies, and establish a pattern of rule’.Footnote 8 This regularly occurs in the non-State provision of public services, including water, healthcare and education. Recently, there has been increasing reliance on AI developed by private businesses to help conduct various governance activities, for example in the criminal justice sector or in the context of smart cities, with questionable results from a human rights perspective.Footnote 9 However, non-State actors may also take it upon themselves to perform governance activities in order to address glaring governance gaps. Arguably, this is what has led to the wide array of governance initiatives concerning AI being adopted by a variety of governmental and non-State actors.
This article is based on an assessment of 99 initiatives which contain legally binding and/or non-binding standards directly related to AI (eg standards or strategies for the development of ethical AI) or standards that do not expressly address AI but which nevertheless have an impact on how it is developed and deployed. These include international guidelines and principles, self-regulatory instruments adopted by businesses, regional (and particularly European) legislation and governance instruments, and national (and sub-national) legislation, policy and governance documents. Authors of these instruments range from States to international organisations, non-governmental organisations, independent expert groups, business enterprises and more. The initiatives were largely identified using two databases: AI Ethics Lab's ‘Toolbox: Dynamics of AI Principles’;Footnote 10 and Nesta's ‘AI Governance Database’.Footnote 11 Further relevant initiatives were identified in academic literature and the media, as well as in the sources found in the databases. The scope of the analysis is limited by the research method adopted and does not claim to be exhaustive.Footnote 12 However, the initiatives examined provide a cross-section of significant existing initiatives and sufficient material from which to draw the conclusions arrived at below.
II. THE PROBLEM(S) OF LEGAL UNCERTAINTY
The ongoing development of AI technologies presents international law with a number of challenges. These include (i) the need for new laws; (ii) legal certainty; (iii) incorrect scope of existing laws; and (iv) legal obsolescence.Footnote 13 While legal certainty has long been an issue in relation to various areas of international law,Footnote 14 the present article focuses on the problem of legal certainty in the specific context of international human rights law and AI.Footnote 15 In particular, it addresses what Rebecca Crootof and BJ Ard label ‘application uncertainties’ that concern ‘indeterminacy as to whether and how existing law applies to an artifact, actor, or activity’.Footnote 16 The importance of legal certainty in this context is outlined below.
This article takes as a starting point that ‘[t]he demand for [legal] certainty creates a pressure for clear and precise rules, so that everyone knows where they stand’.Footnote 17 The need for clear, precise, accessible and consistent rules that allow actors to understand not only what their rights and obligations are, but also the consequences for not conforming to those rules, has been repeated on many occasions over many years.Footnote 18 This, in essence, is a call for legal certainty.
A link can also be made between legal certainty on the one hand and accountability and access to remedies on the other. If it is not known what an actor involved in the development/deployment of AI is responsible for, it is extremely challenging, if not impossible, to effectively hold them to account and ensure an effective remedy for victims of resulting human rights violations.Footnote 19 With private businesses playing a leading role in the development and deployment of AI, legal certainty is crucial for three stakeholders in particular:
I. Victims of human rights abuses caused by reliance on AI systems. These actors need to know: Under what circumstances can they claim a violation of their rights? Where can they make such claims? These questions have a huge impact on victims’ ability to seek access to an effective remedy.
II. State entities. These actors need to know: How are they expected to respect, protect and fulfil human rights in situations involving AI? In particular, what regulatory (and other) measures are States expected to undertake in order to protect human rights? Where do the obligations of States begin and end when an AI system that they use was developed by the private sector but is used by the public sector? How do the responsibilities and/or obligations of differing AI actors relate to each other?Footnote 20
III. Private businesses developing and/or deploying AI. These actors need to know: What binding or non-binding standards should they follow? How do their responsibilities relate to those of other AI actors? What does this mean for them in their everyday operations and in relation to their specific AI system/s?
This article does not aspire to answer all these questions, which are crucial for achieving legal certainty and which could provide insights for potential future primary sources of international law on AI. Rather, and bearing in mind the limitations of international human rights law regarding AI and human rights, the article argues that some of the answers can be found in the broad range of AI law-and-governance initiatives that have been adopted at the international, regional and national levels.
The remainder of this section will focus on three main areas where greater legal certainty and clarity are needed: (A) gaps in the law; (B) AI, business and human rights; and (C) the abundance of AI (ethics) governance initiatives.
A. Gaps in the Law
Gaps in international human rights law concerning AI contribute significantly to legal uncertainty.Footnote 21 There is currently no express reference to AI in any of the primary sources listed in Article 38 of the Statute of the International Court of JusticeFootnote 22 (international human rights treaties, customary international law and general principles of international law).Footnote 23 As a result, reliance must be placed on subsidiary sourcesFootnote 24 and interpretations of existing law to indicate how the more general standards found in primary sources (predominantly human rights treaties) apply in the context of AI. This is not specific to AI. However, because the human rights risks of AI have only relatively recently come to the fore, authoritative international interpretations of how international human rights law applies to AI are very limited. For instance, whilst there have been several key cases concerning AI and human rights at the national and regional levels showing how the right to privacy can apply to AI,Footnote 25 these are limited in scope and application. Judicial decisions at the international level do not yet exist. Even if such cases materialise in the future, they will be restricted to the subject matter of the specific case and only bind the parties to them.Footnote 26
That said, there are an increasing number of important comments by a variety of UN actors in the human rights sphere. For example, the UN Committee on Economic, Social and Cultural Rights (CteeESCR) adopted a general comment on the right to science in 2020, in which it discusses some of the risks posed and benefits offered by AI for human rights.Footnote 27 Significantly, the Committee stresses the need for States to ‘establish a legal framework that imposes on non-state actors a duty of human rights due diligence’.Footnote 28 This builds on its previous General Comments that have also emphasised the need for State regulation of non-State actors, especially businesses.Footnote 29 Further, in March 2021 the UN Committee on the Rights of the Child (CteeRC) adopted a general comment on the rights of children in relation to the digital environment.Footnote 30 Like the CteeESCR's, the CteeRC's general comment does not focus only on AI, which is expressly mentioned only once. However, it sheds light on the numerous ways in which AI can negatively impact children's rights and directly addresses the role and responsibilities of businesses (and States with regard to businesses).Footnote 31
Several reports have also been adopted by special procedures of the UN Human Rights Council, such as the Special Rapporteur on Extreme Poverty and Human Rights and the Special Rapporteur on Freedom of Expression.Footnote 32 The former Special Rapporteur on Extreme Poverty and Human Rights, Philip Alston, even submitted an amicus curiae brief in relation to a national case heard in the Netherlands which concerned the right to privacy under the European Convention on Human Rights and a piece of Dutch legislation that allowed the use of AI for risk-profiling in the welfare sector.Footnote 33 Further, in September 2021 the UN High Commissioner for Human Rights published a report on the right to privacy in the digital age. In the report, High Commissioner Michelle Bachelet discusses the many risks that AI poses to privacy and provides suggestions for safeguards that should be designed and implemented by both States and the private sector to prevent and mitigate them.Footnote 34 Nonetheless, there are still many gaps and uncertainties about how obligations and responsibilities under international human rights law apply to AI, particularly beyond the realm of privacy and data protection.
A significant contributing factor to these gaps is the limited ability of the current international human rights law framework to keep abreast of developments relating to AI.Footnote 35 Irrespective of whether a new international treaty addressing AI and human rights would be desirable, the adoption of new multilateral treaties can be a slow and painstaking process.Footnote 36 Customary international law on AI and human rights would also take time to develop since ‘there is no such thing as “instant custom”’.Footnote 37
Furthermore, international human rights law tends to develop in response to specific allegations of human rights violations. There have been many attempts by human rights adjudicatory bodiesFootnote 38 and in scholarly worksFootnote 39 to clarify the more preventive responsibilities and obligations under international human rights law, for instance through the delineation of due diligence obligations. However, the question of how these apply to the specific situation of AI remains. In particular, the nature of some AI technologies can make it extremely difficult to foresee both their capabilities and the risks of their use. Machine learning systems have been particularly criticised in this respect. They can be highly complex and continue to teach themselves over time to become more efficient. Machine learning systems may follow paths that were not envisaged by the programmer developing the system. As Matthew Scherer notes, this may even be the intention of the programmer and the appeal of a particular system, since it enables the machine to come to conclusions that would not have been made by humans.Footnote 40 Such unpredictability makes it difficult for lawmakers to ‘future proof’ the law, a challenge with which regulators on all levels are grappling.Footnote 41 Another challenge is the relative unpredictability of how existing AI technologies will be used in the future. A worrying example is the deployment of AI systems in military contexts when they were not intended for such use.Footnote 42
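To make the point about unforeseen decision paths concrete, the following minimal sketch (in Python, using scikit-learn) shows how a learned model can adopt a rule its programmer never wrote. The data, feature names and scenario are hypothetical illustrations, not drawn from any system discussed in this article: the programmer specifies only the training procedure, yet the model may come to decide on the basis of a feature, here a postcode indicator, that could serve as a proxy for a protected characteristic.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training data: [income (in thousands), postcode_indicator];
# labels indicate whether a past loan was repaid (1) or not (0).
# In this toy dataset the postcode happens to separate the outcomes perfectly.
X = [[30, 0], [35, 0], [60, 1], [65, 1], [32, 1], [62, 0]]
y = [0, 0, 1, 1, 1, 0]

# The programmer writes no rule about postcodes, only a training procedure...
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# ...but the learned tree splits on the postcode indicator anyway: a decision
# path not envisaged by the programmer, and one that could indirectly
# discriminate if postcode correlates with a protected characteristic.
print(export_text(model, feature_names=["income", "postcode_indicator"]))
```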
Ultimately, the unique phenomenon of AI poses numerous challenges to international human rights law. Despite some more recent observations from human rights bodies, there is still a considerable lack of legal certainty as to what exactly is expected from AI actors concerning AI and human rights.
B. AI, Business and Human Rights
For businesses, the lack of certainty is exacerbated by the absence of any binding human rights obligations. There are two main avenues for determining the international human rights standards to which businesses should be held: (I) the ‘indirect horizontal effect of human rights’; and (II) soft-law instruments concerning the corporate responsibility to respect human rights. Indirect horizontal effect most commonly involves the State's positive obligation to protect human rights from harmful interference by non-State actors, including businesses.Footnote 43 This can have a positive effect by ‘requir[ing] a State to adopt more effective measures to protect individuals from harm by non-state actors’.Footnote 44 Nonetheless, close inspection of cases in which indirect horizontal effect has been applied suggests that the standards expected of businesses or other non-State actors at the international level are very rarely mentioned explicitly.Footnote 45 Consequently, such case law does not add a great deal of clarity for businesses.
Soft-law instruments on the corporate responsibility to respect human rights are much more helpful in this regard. This is particularly true of the UN Guiding Principles on Business and Human Rights (UNGPs).Footnote 46 The UNGPs were drafted by the late John Ruggie in his capacity as Special Representative of the Secretary-General on Human Rights and Transnational Corporations and Other Business Enterprises, with input from various stakeholders, including businesses themselves.Footnote 47 Unanimously endorsed by the UN Human Rights Council in 2011,Footnote 48 the UNGPs have paved the way for other significant developments in business and human rights, including the draft binding international treaty on business and human rightsFootnote 49 and the proposed EU mandatory corporate sustainability due diligence legislation.Footnote 50 The UNGPs lay down the main components of the corporate responsibility to respect human rights: (I) the adoption of a policy commitment to respect human rights; (II) a human rights due diligence process to identify, prevent, mitigate and account for how a business addresses its human rights impacts; and (III) processes for the remediation of adverse impacts caused or contributed to by the business.Footnote 51 More detailed principles explain what these various components require businesses to do.Footnote 52
The UNGPs are applicable to all business enterprises. Whilst this gives them a broad scope, it also means that further work is needed to adapt them to specific business contexts, and what they specifically require of businesses involved in developing AI is not addressed as such. Although this generality was necessary to ensure that the UNGPs were widely applicable, it means that they offer AI businesses limited guidance on how to respect human rights in practice. Given these shortcomings, civil society and businesses have turned to extra-legal initiatives (some of which are discussed in Section III) and have also supported legally binding initiatives such as the European Commission's proposed directive on corporate sustainability due diligence.Footnote 53 A key incentive for doing so is the prospect of greater legal certainty.Footnote 54
AI fits very well into current debates on business and human rights, which draw particular attention to the role of businesses in both causing and mitigating contemporary phenomena posing significant risks to human rights, such as climate change.Footnote 55 For example, significant attention has been paid to the need for transparency in supply chains, both in the literature and in recent and ongoing regional and national legislative initiatives.Footnote 56 Transparency of and within supply chains is a notorious problem for AI. Supply chains can be highly complex, with businesses and individuals from around the world contributing to a single AI system that has the potential to affect a huge number of individuals. The lack of transparency in AI supply chains is exacerbated by the ‘discreteness’ of AI—the ‘mishmash of software and hardware components harvested by many different companies’Footnote 57 in different locations as well as the interaction between these components, coupled with the opacity of some AI systems and the desire of some companies to maintain trade secrets. Whilst the lack of transparency in supply chains may not be specific to AI, the opacity and lack of understanding surrounding AI systems and their development, especially by those who deploy but do not create the systems, causes particular problems.Footnote 58
AI poses a potentially unprecedented threat to a wide range of human rights in countless situations and adds further weight to arguments for greater corporate human rights responsibility and accountability. As discussed below, elements of the European Commission's proposed ‘AI Act’ are reminiscent of human rights due diligence obligations, which represents an implicit acknowledgement of the need for legally binding obligations concerning corporate human rights responsibilities relating to AI.
C. The Abundance of AI Ethics Governance Initiatives
In the absence of clear international human rights law standards pertaining to AI, a plethora of law-and-governance initiatives have been undertaken examining the role and risks of AI within society, how the development and deployment of AI should be regulated to mitigate these risks, and the responsibilities of the different actors involved. The result is a somewhat disjointed landscape of overlapping governance activities by a wide range of actors. As seen below, these initiatives may provide a degree of clarity concerning how human rights should be interpreted and applied in relation to AI. However, the UN CteeESCR has warned against a fragmented approach to transnational technologies such as AI, as it risks ‘creat[ing] governance gaps detrimental to the enjoyment of economic, social and cultural rights’.Footnote 59 Whilst it would not be possible (or desirable) to tackle the issue of AI and human rights through one single instrument, the sheer number of governance initiatives that exist could create challenges for businesses and States alike in knowing what standards they should be following in a given situation. For this reason, coordination and cooperation between actors and the initiatives they take should be encouraged, in order to avoid conflicts and unnecessary overlap and to strengthen the efficiency of AI governance.
Furthermore, while they may engage with human rights issues, many of these initiatives approach the subject from an ethical perspective. Even though ethics and human rights can be mutually reinforcing, it is imperative that a human rights approach to AI be taken. First, international human rights law comprises internationally agreed standards and obligations, and includes guidance concerning how competing rights and interests should be balanced against one another.Footnote 60 Secondly, although far from perfect, international human rights law provides for, and places an emphasis on, the importance of accountability mechanisms and access to remedies.Footnote 61 Accountability is also a key concern of AI ethicsFootnote 62 and is closely related to the right to an effective remedy.Footnote 63 Thirdly, for all of its flaws, international human rights law does contain soft-law standards on the human rights responsibilities of businesses,Footnote 64 as summarised above. The focus on AI ethics in AI law-and-governance initiatives may draw attention away from international human rights law and may result in confusion in situations where human rights law and ethics contain different standards, or even conflict with one another.
To summarise, there is an array of initiatives that may simultaneously provide crucial insights into the application of human rights to AI whilst also being a potential cause of further uncertainty. The following sections discuss in more detail AI law-and-governance initiatives at the international, regional and national levels and critically assess their contribution to bringing clarity to human rights standards.
III. INTERNATIONAL AI INITIATIVES
As noted above, there are no legally binding instruments specifically dealing with AI under international human rights law. There are, however, several important initiatives that could have an impact on the protection of human rights and contribute to clarifying applicable standards.
One example is the work of UNI Global Union, which is a global union federation of national and regional trade unions. In 2017, the Union adopted an instrument containing the ‘Top 10 Principles for Ethical AI’.Footnote 65 The document is explicitly ethics-based, but does also engage with human rights, stating in Principle 3, for instance, that ‘AI systems must remain compatible and increase the principles of human dignity, integrity, freedom, privacy and cultural and gender diversity, as well as with fundamental human rights’. Workers’ rights are also specifically mentioned in Principle 7, which emphasises the responsibility of businesses as well as States to ensure that when workers are replaced by AI, they have the right of access to social security and to ‘lifelong continuous learning to remain employable’. The UNI Global Union's principles also, and more explicitly, highlight corporate accountability, particularly when workers are displaced due to the use of AI. They also note the right of individuals to appeal decisions made by AI and to have a human review of such decisions (reflecting close links to the right to an effective remedy) and advocate codes of ethics for the development, application and deployment of AI to ensure compliance with fundamental rights. The codes of ethics are not elaborated upon but could presumably take the form of corporate or industry codes of conduct, either on a voluntary basis or mandated by national law.
An important example of a document that is expressly based on the international human rights law framework and directly addresses the corporate responsibility of AI businesses is the ‘Toronto Declaration on Protecting the Right to Equality and Non-Discrimination in Machine Learning Systems’.Footnote 66 This is a joint initiative by Access Now and Amnesty International. It was adopted in 2018 and is significant because of its direct articulation of the responsibilities of private actors. The Declaration's scope is limited to non-discrimination and equality, but it contains some more general points that are applicable to a broader range of human rights. Whilst the Declaration is not legally binding, it does provide some useful clarity for both States and businesses regarding which human rights standards should be followed in the development and deployment of AI.
The Declaration essentially echoes the more general standards of the UNGPs but also indicates how due diligence should be followed in the context of machine learning. For example, businesses developing AI should submit high-risk systems to third-party auditors,Footnote 67 and conduct ongoing quality checks and real-time auditing throughout the design, testing and deployment stages.Footnote 68 The Declaration also emphasises the need for transparency, particularly regarding due diligence processes but also regarding technical specifications and details of algorithms, ‘including samples of training data and details of the source of data’.Footnote 69 It is not clear from the Declaration to whom this information should be accessible (ie to auditors, end users, or the public at large) and the details of what exactly should be transparent remain somewhat murky. Notwithstanding the relative ambiguity here, the bottom line of the Declaration is clear: businesses should not deploy algorithms with high risks.Footnote 70 If significant risks to human rights come to light during the due diligence process of a business, it should either make adjustments to mitigate the risks, or simply not go ahead with the project.
Other interesting non-binding initiatives have been undertaken by the Organisation for Economic Co-operation and Development (OECD). As well as the AI Principles adopted by a Recommendation of the OECD's Council on Artificial Intelligence in 2019,Footnote 71 in September 2021 it published guidance on the application of human rights due diligence (HRDD) to situations involving AI.Footnote 72 The Recommendation notes the relevance of the human rights framework to AI, and in particular the Universal Declaration of Human Rights. Recommendation 1.2 on human-centred values and fairness notes that ‘AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle’, such as privacy and data protection, non-discrimination and equality, and internationally recognised labour rights. Further, Recommendation 1.5 states that ‘AI actors [including businesses] should be accountable for the proper functioning of AI systems and for the respect of the [OECD's] principles, based on their roles, the context, and consistent with the state of art.’ Accountability is a notorious challenge for ethical and human rights-compliant AI, but it is key to its achievement and a recurring theme throughout the international initiatives analysed. While the Recommendation does not indicate how different AI actors might be held to account, by putting accountability centre stage, as many other AI ethics initiatives do, it encourages others to develop more specific standards or recommendations in this respect.
Going a step further, in its guidance on the requirements of ‘HRDD through responsible AI’Footnote 73 the OECD provides an insight into what the corporate responsibility to respect human rights as expressed in the UNGPs requires of businesses developing or deploying AI or involved in an AI supply chain.Footnote 74 Arguably, this initiative provides the most detailed guidance available at the international level regarding the role and responsibility of the private sector. It also provides insights into the standards expected of States on a number of issues, including their obligation to regulate the private sector,Footnote 75 and regarding the requirements of due diligence when State actors feature in an AI supply chain.
Another important example is the work of the UN Educational, Scientific and Cultural Organization (UNESCO).Footnote 76 UNESCO appointed a group of 24 experts to draft a ‘Recommendation on the Ethics of Artificial Intelligence’, to provide ‘an ethical guiding compass and a global normative bedrock allowing to build a strong respect for the rule of law in the digital world’.Footnote 77 After receiving input from various stakeholders on earlier drafts,Footnote 78 the final text of the Recommendation was adopted in November 2021. Although framed as an ethics-based initiative, an objective of UNESCO's Recommendation is ‘to protect, promote and respect human rights and fundamental freedoms, human dignity and equality’.Footnote 79 This forms the basis for four of the core values upon which the Recommendation is based. There are 64 references in the preamble and final text to various UN-level reports highlighting the relationship between human rights and AI. The importance of more specific human rights issues, including the rights of the elderly and the right to science, also feature in the document.
The Recommendation also provides insights concerning the relationship between ethics and human rights. For instance, the preamble recognises that ‘ethical values and principles can help develop and implement rights-based policy measures and legal norms, by providing guidance with a view to the fast pace of technological development’. Interestingly, the Draft Recommendation had suggested that when trade-offs between different ethical principles have to be made, stakeholders should be ‘guided by international human rights law, standards and principles’.Footnote 80 This was not included in the final Recommendation, but the notion that AI ethics and human rights can be mutually reinforcing and that in such situations human rights law should provide guidance is reflected on many occasions throughout the document.
For example, paragraph 48 provides that the ‘main’ policy action is that States should take effective measures to operationalise the values and principles set down in the Recommendation and, crucially, to ensure that other AI actors follow these standards. To that end, AI businesses should conduct due diligence and ethical impact assessments in accordance with the UNGPs.Footnote 81 Unlike the Draft Recommendation, the final text does not expressly say that ethics and human rights are not synonymous.Footnote 82 It nonetheless recognises, in the passage of the preamble quoted above, that ethical values and principles can help develop and implement rights-based policy measures and legal norms.Footnote 83 This reiterates the position of UNESCO that ethics and human rights are closely connected. It suggests, in line with the stance taken in this article, that AI ethical standards may be able to compensate, to some extent, for the lack of legal certainty and the shortcomings of international human rights law regarding the risks posed by AI, such as its inability to match the pace of technological development.
Overall, a range of initiatives have been undertaken at the international level, and even those that are ethics-based place considerable emphasis on human or fundamental rights. Taken together, these initiatives certainly provide greater clarity concerning applicable standards for a range of public and private actors involved in the development or deployment of AI.
IV. REGIONAL AI INITIATIVES
At the regional level, many law-and-governance initiatives have been taken that could have an impact on legal certainty and the protection of human rights in relation to AI. This section will consider a number of European initiatives, as this is the region that has been most active in the governance of AI.
In terms of legally binding instruments, the European Union General Data Protection Regulation (GDPR) is perhaps the most obvious example.Footnote 84 Like many initiatives targeting privacy and data protection, the GDPR is not specific to AI, but applies more generally to data processing activities. The overall aim of the GDPR is to protect individuals’ personal data.Footnote 85 The GDPR lays down due diligence standards for companies involved in processing data and in the development of AI. It requires ‘data controllers’Footnote 86 to monitor respect for the rights of individuals ‘to informed consent and freedom of choice when submitting data, as well as their right to access, amend and verify data’.Footnote 87 As regards AI specifically, Article 22 of the GDPR prohibits some forms of automated decision-making. Whilst some commentators have been disappointed with its practical impact,Footnote 88 the GDPR is a significant development in relation to AI since it adds a degree of legal certainty and it clearly places standards on some businesses developing AI, which can be fined for non-compliance.Footnote 89
The EU has also developed the well-known ‘Ethics Guidelines for Trustworthy AI’, adopted by the High-Level Expert Group on Artificial Intelligence established by the European Commission.Footnote 90 The Guidelines set out seven requirements for trustworthy AI, based on four ethical principles. The Guidelines use the language and framework of ethics throughout but present themselves as being based on fundamental rights as reflected in international human rights law. The Guidelines even go so far as to say that ‘[r]espect for fundamental rights … provides the most promising foundations for identifying abstract ethical principles and values, which can be operationalised in the context of AI’.Footnote 91 The Guidelines go on to say that respect for human dignity is the common foundation for human rights, which itself reflects the Guidelines’ ‘human-centric approach’ to AI.Footnote 92 As a result, many of the points made in the Guidelines align with human rights standards, even when this is not expressly stated. This is certainly true of the Guidelines’ provisions concerning privacy and bias, the latter of which is closely connected to the right to non-discrimination.
The European Commission and European Parliament have adopted numerous AI initiatives, including the Commission's ‘White Paper on Artificial Intelligence’Footnote 93 and a series of Parliament Resolutions and recommendations to the Commission relating to AI on topics including: ethics;Footnote 94 liability;Footnote 95 copyright;Footnote 96 criminal matters;Footnote 97 and education, culture and the audiovisual sector.Footnote 98 Given the differing scope of these various initiatives, it is unsurprising that they engage with human rights to differing degrees. For instance, the Resolution on a Framework of AI Ethics pays significantly more attention to a broader range of fundamental rights than does the Resolution on copyright, but the copyright Resolution has a particular focus on data protection and privacy. The Resolution on a Framework of AI Ethics also expressly engages with the need for an EU regulatory framework on AI to be based on international human rights law.
In April 2021 the European Commission published the draft ‘Artificial Intelligence Act’,Footnote 99 which sets out a proposed legal framework for AI. Although an EU instrument, the draft Regulation could have a considerably broader impact geographically.Footnote 100 The Artificial Intelligence Act builds on ethics-based initiatives such as the Ethics Guidelines for Trustworthy AI and the Resolution on a Framework of AI Ethics. The initial draft has been met with mixed reactions.Footnote 101 It places a significant, although arguably still insufficient, emphasis on fundamental rights.Footnote 102 This is reflected in two of the proposal's four specific objectives, the first of which is to ‘ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values’.Footnote 103 The proposal contains a list of relevant rights found in the EU Charter of Fundamental Rights that must be protected in relation to AI,Footnote 104 building on those rights pinpointed in the Commission's White Paper and the European Parliament's Resolutions. Workers’ rights, freedom of expression, the rights to an effective remedy and to a fair trial are all listed, as well as the right to a high level of environmental protection.Footnote 105 Significantly, the proposal only places ‘regulatory burdens’ on AI systems that are ‘likely to pose high risks to fundamental rights and safety’ (the so-called ‘high risk’ systemsFootnote 106). Systems unlikely to pose such risks are subject to much more limited transparency requirements and businesses developing these systems are encouraged to adopt codes of conductFootnote 107 rather than being required to conduct the compliance assessments which are obligatory for high-risk systems. Leaving aside the draft's current flaws for the moment, the proposed regulation does provide relatively detailed standards for both States and businesses working with AI. Although not framed as a ‘human rights’ initiative, the draft arguably goes some way to addressing the legal uncertainty regarding the application of international and regional human rights law to AI for both States and AI businesses. It is to be hoped that the final version of the regulation will go even further in this regard.
Within the Council of Europe, the Protocol amending the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data is also noteworthy.Footnote 108 The Protocol, which is not yet in force, aims to modernise the ConventionFootnote 109 and address emerging challenges to the effective protection of privacy brought about by new technologies.Footnote 110 The instrument is not an ‘AI’ initiative per se but, similar to the GDPR, it would have an impact on some aspects of the development and deployment of AI.
The Protocol expressly refers to the need to ensure protection of human rights and fundamental freedoms and to balance rights against one another (eg privacy and freedom of expression). Specifically, Article 5(1) of the consolidated treaty (‘Convention 108+’) contains a requirement that data processing be ‘proportionate in relation to the legitimate purpose pursued and reflect at all stages of the processing a fair balance between all interests concerned, whether public or private, and the rights and freedoms at stake’. This essentially reflects the balancing act required in the application of the legitimate limitations to human rights found in some provisions of the European Convention on Human Rights.Footnote 111
The Protocol takes the approach typical of the Council of Europe in placing positive obligations on State Parties, which include the obligation to ensure the protection of individuals from violations by the private sector. This includes, for instance, the obligation in Article 10(2) that State Parties ‘provide that controllers and, where applicable, processors, examine the likely impact of intended data processing on the rights and fundamental freedoms of data subjects prior to the commencement of such processing’,Footnote 112 which resembles a duty of human rights due diligence or impact assessment that the State should impose on (private) data controllers. Despite this, the Explanatory Report to the ConventionFootnote 113 seems to suggest that the obligation may be less formal than the due diligence processes expected under other instruments (eg the UNGPs), and this could result in different, although not necessarily conflicting, standards. In any case, the Convention certainly provides clarity and confirmation regarding the applicability of certain human rights standards in the context of AI.
The Recommendation of the Committee of Ministers on the human rights impacts of algorithmic systems is also significant, particularly in how it addresses the responsibilities of AI businesses.Footnote 114 This document, clearly a human rights-based initiative, contains a recommendation that the governments of Council of Europe Member States:
ensure, through appropriate legislative, regulatory and supervisory frameworks related to algorithmic systems, that private sector actors engaged in the design, development and ongoing deployment of such systems comply with the applicable laws and fulfil their responsibilities to respect human rights in line with the [UNGPs] and relevant regional and international standards.Footnote 115
This is the most explicit reference to the human rights responsibilities of private businesses in any of the regional initiatives analysed and brings the Recommendation into line with the Toronto Declaration at the international level.
The approach here is again one based on the positive obligations of States and, like the Protocol discussed above, would require States to place a duty of due diligence on the businesses mentioned.Footnote 116 Indeed, the Appendix to the Recommendation explains that pursuant to ‘the horizontal effect of human rights’Footnote 117 and the central role of private sector actors in all stages of an AI system's life cycle, including in collaboration with the public sector, ‘some of the key provisions that are outlined [in the Recommendation] as obligations of States translate into legal and regulatory requirements at national level and into corporate responsibilities for private sector actors’—in other words, it has indirect horizontal effect, as explained in Section II.B above.Footnote 118 However, the Appendix to the Recommendation emphasises that businesses should comply with this responsibility regardless of whether or not States are able and willing to fulfil their own human rights obligations. Ultimately, a very clear human rights-based approach is taken by the Recommendation, with due consideration given to the role of different actors in the protection of human rights. Having said that, the standards themselves are relatively vague and, unlike the Toronto Declaration, do not clarify in detail the standards expected of businesses. Rather, and similarly to Convention 108+, the Recommendation reconfirms the applicability of general standards in relation to AI.
Very little action has been taken by regional organisations outside Europe, although the African Union has ‘called for a structured regulation of AI to manage the benefits of the technology for Africans, and to foresee and curb the risks’.Footnote 119 In Asia, no instruments have been adopted by the Association of South East Asian Nations (ASEAN) but various national initiatives have been adopted within this region (see Section V below). The same can be said regarding the Inter-American human rights system. Looking beyond international organisations, the McKinsey Global Institute has reviewed the current state of play concerning AI in ASEAN States and has made several recommendations to both States and businesses developing or deploying AI, albeit with extremely scant reference to either ethics or human rights. There is no direct discussion of ‘rights’ as such and only one mention of ethics, made when flagging the complex ethical questions that arise from AI.Footnote 120 Its most significant comments relate to rights-based or ethical standards concerning privacy, stressing the need for governments to comply with ‘privacy norms and laws’ and ‘to grapple with defining principles of privacy as new uses are generated by AI’.Footnote 121 Again, this does not in itself help clarify the applicable standards, although taken together, the regional initiatives could be said to clarify more generally which human rights standards are applicable, and to whom, in the context of AI.
V. NATIONAL AI INITIATIVES
A number of countries have now adopted national strategies concerning AI, and some have also adopted legislation. However, the inclusion of human rights in such strategies and legislation is another question. In some countries, such as Singapore, AI governance frameworks have been proposed which are entirely based on ethics and only refer to human rights in a passing manner. Singapore's Personal Data Protection Commission adopted a revised ‘Proposed Model AI Governance Framework’ in 2020, with the purpose of converting ethical principles into ‘implementable practices’.Footnote 122 Whilst being listed in Annex A as a ‘foundational ethical principle’, respect for international human rights in the design, development and implementation of AI is not mentioned at all in the main text. The importance of AI solutions being ‘human-centric’ is said to be a ‘high-level guiding principle’, meaning that the ‘well-being and safety [of human beings] should be primary considerations in the design, development and deployment of AI’.Footnote 123 The Framework contains suggestions as to how this can be achieved, many of which would contribute to human rights protection. For example, it proposes a ‘probability-severity of harm matrix’ to help entities wishing to deploy an AI model in decision-making processes determine the degree to which humans should be involved, in order to mitigate the potential harm to individuals caused by reliance on the model.Footnote 124 The document also emphasises the importance of establishing clear roles and responsibilities for different actors in the ethical deployment of AI, although it does not provide specific guidance.
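By way of illustration, the following minimal sketch (in Python) shows the kind of mapping such a probability-severity matrix implies: an assessment of the likelihood and severity of harm is translated into a level of human involvement in the decision-making process. The thresholds, labels and quadrant-to-oversight mapping below are hypothetical simplifications for illustration only; they are not drawn from the Framework itself, which leaves the calibration of the matrix to the deploying organisation.

```python
from enum import Enum

class Oversight(Enum):
    """Common AI governance shorthand for degrees of human involvement."""
    HUMAN_IN_THE_LOOP = "a human approves each individual decision"
    HUMAN_OVER_THE_LOOP = "decisions are automated; a human monitors and can intervene"
    HUMAN_OUT_OF_THE_LOOP = "decisions are fully automated"

def recommended_oversight(probability: str, severity: str) -> Oversight:
    """Map a ('low'/'high') probability and severity of harm to an oversight level.

    The mapping mirrors the quadrants of a simple risk matrix and is a
    hypothetical illustration, not the Framework's own calibration.
    """
    if severity == "high":
        # Severe potential harm warrants a human decision-maker,
        # however unlikely that harm may be.
        return Oversight.HUMAN_IN_THE_LOOP
    if probability == "high":
        # Likely but minor harms: automate, but keep a human supervising.
        return Oversight.HUMAN_OVER_THE_LOOP
    # Unlikely and minor harms: full automation may be acceptable.
    return Oversight.HUMAN_OUT_OF_THE_LOOP

# Example: an AI system recommending medical treatment (high-severity harm).
print(recommended_oversight("low", "high").value)
```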
In Denmark, the Danish Expert Group on Data Ethics adopted recommendations on ‘Data for the Benefit of the People’.Footnote 125 These recommendations are intended to foster the responsible use of data in the business sector and are restricted to the context of data processing. Reference is made to equality and non-discrimination with regard to bias in AI, and to human dignity, which is said to outweigh profit and ‘must be respected in all data processes’.Footnote 126 The recommendations suggest ways of ensuring that these ‘values’, which are also found in international human rights law,Footnote 127 form the foundation of data-driven systems and of future policy and legislation regarding data processing. These include, among other measures, introducing an Independent Council for Data Ethics, a ‘data ethics oath’ similar to a doctor's Hippocratic oath, and mandatory declarations of data ethics in the annual financial statements of large companies. Each of these measures can contribute to the protection of non-discrimination and dignity.
In 2020 and based in part on these recommendations, Denmark amended its national legislation to add a requirement that from January 2021 onwards, large and listed companies and State-owned public limited companies had to provide information in their annual reports on their data ethics policies or publish them on their website, with an explanation as to why no policy had been implemented if that was the case (known as the ‘comply or explain’ principle).Footnote 128 The information to be provided could include, inter alia, how and why the company uses and chooses to implement new technologies, including AI, and how algorithms used by the company are trained, as well as safeguards that are put in place to mitigate bias.Footnote 129 In terms of concrete standards and contributing to legal clarity for businesses and States alike, these measures are quite robust. There is also a focus on transparency, which as noted above can enhance accountability and access to remedies and can also have a ‘domino effect’ on the protection of other rights, such as privacy and non-discrimination.
The German Data Ethics Commission has also adopted an Opinion which expressly supported the ‘European path’, by which:
the defining feature of European technologies should be their consistent alignment with European values and fundamental rights, in particular those enshrined in the European Union's Charter of Fundamental Rights and the Council of Europe's Convention for the Protection of Human Rights and Fundamental Freedoms.Footnote 130
A rights-based approach is certainly evident in the Opinion, which is based on an understanding that, whilst important, ethics cannot replace regulation, particularly regarding issues such as AI where ‘heightened implications for fundamental rights’Footnote 131 require key decisions to be made by democratically elected representatives. It also makes recommendations for both governments and businesses regarding AI, suggesting, among other measures, the consideration of ‘enhanced obligations of private enterprises to grant access to data for public interest and public-sector purposes’.Footnote 132 The introduction of a binding ‘Algorithmic Accountability Code’ for operators of algorithmic systems, ‘inspired by the “comply or explain” regulatory model’Footnote 133 as seen in the new Danish legislation, is also suggested. A legally binding code of conduct for businesses accompanied by oversight by an independent body could have a significant impact on the protection of human rights, should it include measures such as the adoption of human rights due diligence processes.
In a similar vein, the German Ethics Council adopted an ‘Opinion on Big Data and Health – Data Sovereignty as the Shaping of Informational Freedom’.Footnote 134 The Council emphasises the need for businesses to take responsibility, suggesting this could be achieved through ‘strengthening the oversight and verifiability of their processes in terms of, for example, the algorithms employed; the measures taken to eliminate systematic discrimination; the adherence to regulations pertaining to data safekeeping, anonymisation and deletion; and the gapless and tamperproof [documentation] of the origin, processing, use and exchange of data’.Footnote 135 Again, each of these measures would go some way to protecting the rights to privacy and non-discrimination.
A different approach is taken by the ‘Norwegian National Strategy for Artificial Intelligence’, adopted by the Norwegian Ministry of Local Government and Modernisation.Footnote 136 Rather than recommending binding standards applicable to businesses, the Strategy encourages businesses to ‘establish their own industry standards or labelling or certification schemes based on the principles for responsible use of artificial intelligence’.Footnote 137 As a strategy document, it contains fewer specific recommendations for businesses and government and is limited to laying down some of the steps Norway will take in the regulation and governance of AI. Nonetheless, there is a recurring commitment to the protection of human rights throughout the document, which also states that Norway will adopt the ethical principles put forward in the EU Ethics Guidelines for Trustworthy AI in its governance of AI and highlights the Guidelines’ basis in fundamental rights.Footnote 138
As of 2020, 128 countries had adopted legislation on data protection more broadly,Footnote 139 which, as seen above, can also require certain conduct of entities working with AI. The materials examined suggest that, apart from privacy, there are relatively few direct references to the protection of human rights in national legislation and official regulations related to AI. However, some instruments, such as those in Australia, New Zealand and Germany,Footnote 140 include more general references to the protection of human rights and also contain standards that can have an impact on the protection of human rights without being framed as such. Other legislative initiatives have been taken at the sub-national level, such as legislation adopted in Washington State in the US regarding governmental use of facial recognition and a bill concerning discrimination and the use of automated decision-making.Footnote 141
Overall, many countries are making strides in the introduction of legislation or regulation concerning AI, including through the adoption of national AI strategies, and non-binding national measures sometimes reference the broad range of human rights found at the international level. This is positive, but beyond data protection and privacy, the protection of human rights has not yet been thoroughly embedded in national legislation related to AI. Nonetheless, there are some positive contributions that enhance legal certainty for both States and businesses in the national initiatives analysed.
VI. ANALYSIS AND COMPARATIVE OBSERVATIONS
A very wide range of initiatives have been taken that set out, even if relatively vaguely, the human rights standards applicable to those involved in developing and deploying AI. Many of the initiatives examined could, if implemented, contribute to the protection of human rights in situations involving AI. While promising, three main issues remain to be adequately addressed in order to enhance human rights protection: (i) consideration of human rights other than those directly related to privacy and data protection, in particular economic, social and cultural rights, which are significantly affected by AI; (ii) more detailed standards regarding the human rights responsibilities of businesses involved in developing AI; and (iii) improved coordination to avoid the adoption of contradictory standards which can undermine legal and regulatory certainty and clarity.
As has been seen, some initiatives approach AI from a human rights perspective (eg the Toronto Declaration and the work of the Council of Europe), some from an ethics perspective which may nevertheless have an impact on human rights protection (eg the EU Ethics Guidelines for Trustworthy AI), whilst others take neither as their starting point but may reference both (eg the UNESCO Recommendation). Some are not based only or directly on AI, but nevertheless have an impact upon it (eg the GDPR and Convention 108+). In addition, some are addressed only to States or State actors, others are addressed to all relevant actors, and a small number specifically address businesses (eg the Toronto Declaration and the OECD's Business and Finance Outlook 2021).
While a range of human rights are covered by these various initiatives, legally binding instruments almost exclusively focus on privacy and data protection (with the key exception of the proposed AI Act). This can be seen within the European Union (eg the GDPR) and the Council of Europe (Convention 108+), at the national level in the US and India, among other countries, and at the subnational, state level in the US (eg the Washington State Legislature Bill). Non-discrimination is also covered, very often indirectly through the concept of bias in documents relating to AI ethics, but sometimes as a human right, particularly within Europe. Labour rights are also occasionally mentioned, though at a relatively superficial level and by way of passing references. There is a need for further clarification of the applicable standards relating to the many other rights potentially negatively affected by reliance on AI.
While many initiatives do in fact discuss human rights, far fewer genuinely engage with them and suggest concrete human rights standards for different AI actors, particularly non-State actors. Indeed, many remain decidedly vague. This includes those that simply state that human rights must be respected or that the principles proposed are generally based on human rights (eg the EU Ethics Guidelines for Trustworthy AI and the Proposed Model AI Governance Framework in Singapore). The same is true of those that engage with human rights more explicitly but fail to set out specific standards (eg the Norwegian National Strategy for Artificial Intelligence). There are, however, important exceptions. The Toronto Declaration, based on the much more general UNGPs, sets out a number of specific responsibilities of both States and businesses with regard to discrimination in machine learning, and the OECD's guidance on HRDD in AI supply chains provides concrete recommendations for various (private) AI actors. Additionally, the Council of Europe's Recommendations lay down several standards for businesses to follow, despite the apparently State-centric wording of the Recommendations themselves.
Interestingly, some ethics-based documents overlap considerably with approaches found within international human rights law and sometimes provide greater detail than human rights law concerning how to achieve, for instance, transparency and accountability in relation to AI. To that extent, international human rights law may be able to draw on AI ethics standards in order to achieve its goals. Examples include calls for staff training and for the inclusivity and diversity of staff, the use of explainable AI models, auditing, the assignment of responsibility to specific actors, the adoption of due diligence processes and the establishment of mechanisms for seeking remedies. These are all called for in statements concerning ethics and AI (eg the Recommendations of the Danish Expert Group on Data Ethics and the EU Ethics Guidelines for Trustworthy AI), but are equally relevant in the context of human rights protection.
Overall, it appears that, with a few exceptions, developments at the international and regional levels more consistently approach human rights in a direct manner, whereas national approaches tend to focus more on AI ethics. However, even where ethics provides the favoured framework, such initiatives are not blind to the need to tackle issues from a human rights perspective and appear to follow the premise that human rights and ethics can be mutually reinforcing (eg the UNESCO Recommendation).
Non-binding initiatives at the international level and within the Council of Europe seem more likely to address human rights directly, and particularly the responsibilities of AI businesses. As extra-legal initiatives, they are not bound by the State-centric legal framework of international law, which facilitates a more direct discussion of corporate responsibility and of how non-binding instruments such as the UNGPs apply to AI. In addition, at the international level human rights tend to be articulated as standards applicable to States. At the national level, legally binding measures focus more on outcomes than on whether the standards advanced are grounded in human rights or in ethics. The ethical focus of many regional and national initiatives also reflects the typical approach of AI practitioners, who tend to think in terms of ethics rather than human rights. At the regional level, ethics-based instruments provide more detail, and hence greater clarity, concerning what is expected of the various actors than human rights-based instruments, and a cross-sectoral analysis is sometimes needed to see how the standards they set out relate to human rights.
In terms of content, many national and regional initiatives focus on the development of standards concerning privacy and data protection, as well as non-discrimination. Efforts must be made to provide specific standards covering the much broader range of human rights that can be negatively impacted by AI, as acknowledged in the proposed AI Act and in numerous international initiatives.
More needs to be done to clarify the extent of corporate responsibility in relation to human rights in the context of AI. A ‘one-size-fits-all’ approach is certainly neither desirable nor possible given the vast range of AI businesses and the AI models that they produce. However, greater legal certainty and clear advice for businesses developing AI are crucial to ensuring the effective protection of human rights in this context. The majority of initiatives at all levels pay attention to what businesses can and should do to ensure ethical or human rights-friendly AI, with some, such as the Toronto Declaration and the UNESCO Recommendation, going so far as to cite the UNGPs expressly. However, many make only limited provision for the supervision and enforcement of standards. This has significant consequences for accountability, which, as noted above, is key to securing human rights protection in this, and indeed any other, setting.
Finally, it is important from a (good) governance perspective to remember that a considerable amount of duplication, overlap and potential contradiction remains, and that better coordination is needed to improve legal and regulatory certainty for those involved in developing and deploying AI. Nonetheless, a key lesson of this analysis is that, in order to fully protect human rights in the era of AI (especially in terms of what can and should be done by the different actors involved), it is necessary to consider standards found not only in legal and human rights-specific instruments, but in a wide range of initiatives that have a bearing on human rights issues, irrespective of whether they are labelled as such.
VII. CONCLUSION
A true law-and-governance approach is being taken globally to try to reap the benefits of AI whilst curbing its negative impacts on individuals and society. Currently, the number of non-binding governance initiatives related to AI and human rights greatly outweighs the number of legally binding initiatives, particularly at the international level. The focus on the rights to privacy and data protection in the legally binding initiatives is understandable, but must be extended to other rights which are not sufficiently addressed, including the rights to food, education, health and a healthy environment. The wide range of actors that have stepped up to take action in this area has produced varied, yet often complementary, sets of standards and recommendations at the national, regional and international levels, with differing impacts on the protection of human rights.
Overall, whilst many initiatives clearly articulate the applicability of existing human rights standards to both businesses and States developing and deploying AI, few provide real clarity concerning what is expected of them. Nevertheless, an examination of these various initiatives can help to remedy, to some extent, the weaknesses of the international human rights law framework in relation to AI: its limited capacity to keep up with the pace of AI development; the limited opportunities for authoritative pronouncements concerning the implications of human rights law for AI; the State-centric approach of international human rights law; and the difficulty of ‘future-proofing’ international human rights law given the uncertainties of how some AI systems may develop.
This analysis shows that initiatives that are not restricted by legal frameworks and are more practically focused are more helpful for AI practitioners, allowing them to see exactly what such standards mean for them in practice. Such initiatives are also better able to tackle issues head-on. It is crucial to make use of the broader and more technical expertise reflected in these initiatives to supplement the standards and approaches developed under international human rights law.