
Responsible artificial intelligence in Africa: towards policy learning

Published online by Cambridge University Press: 09 December 2024

Paul Plantinga*
Affiliation:
Human Sciences Research Council, South Africa
Kristophina Shilongo
Affiliation:
Mozilla Foundation, Namibia
Oarabile Mudongo
Affiliation:
Consumers International, Botswana
Angelique Umubyeyi
Affiliation:
Independent, South Africa
Michael Gastrow
Affiliation:
Human Sciences Research Council, South Africa
Gabriella Razzano
Affiliation:
OpenUp, South Africa
Corresponding author: Paul Plantinga; Email: pplantinga@hsrc.ac.za

Abstract

Several African countries are developing artificial intelligence (AI) strategies and ethics frameworks with the goal of accelerating responsible AI development and adoption. However, many of these governance actions are emerging without consideration for their suitability to local contexts, including whether the proposed policies are feasible to implement and what their impact may be on regulatory outcomes. In response, we suggest that there is a need for more explicit policy learning, by looking at existing governance capabilities and experiences related to algorithms, automation, data, and digital technology in other countries and in adjacent sectors. From such learning, it will be possible to identify where existing capabilities may be adapted or strengthened to address current AI-related opportunities and risks. This paper explores the potential for learning by analysing existing policy and legislation in twelve African countries across three main areas: strategy and multi-stakeholder engagement, human dignity and autonomy, and sector-specific governance. The findings point to a variety of existing capabilities that could be relevant to responsible AI: from existing model management procedures used in banking and air quality assessment to efforts aimed at enhancing public sector skills and transparency around public–private partnerships, and the way in which existing electronic transactions legislation addresses accountability and human oversight. All of these point to the benefit of wider engagement on how existing governance mechanisms are working, and on where AI-specific adjustments or new instruments may be needed.

Type
Data for Policy Proceedings Paper
Creative Commons
CC BY-NC
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial licence (http://creativecommons.org/licenses/by-nc/4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Policy Significance Statement

For public officials and non-governmental organisations working on AI, this paper recommends exploring existing policy and legislation related to algorithms, automation, data, and digital technology. There are opportunities to draw from previous experiences with multi-stakeholder forums, accountability and human oversight, and model management, amongst others, and to use existing regulatory capabilities for realising responsible AI outcomes.

1. Introduction

There has been significant progress with digital governance in African countries over the past decade: from the passing of personal data protection and cybercrime legislation to the development of national data strategies (Ndubueze, 2020; Daigle, 2021). More recently, a range of frontier technologies have garnered special attention, usually as part of a wider “fourth industrial revolution” (4IR) agenda (Ayentimi and Burgess, 2019). Distributed ledger technology, the Internet of Things, and unmanned aerial vehicles are a few that have been especially prominent, and in some cases addressed through dedicated strategies.[1] However, it is artificial intelligence (AI) that has come to dominate much of this emerging technology narrative, mainly because it, along with data, is seen to be a “general purpose technology” that is embedded in or transforming other digital domains and society broadly.[2]

As a result, African states are developing national AI strategies and establishing AI advisory bodies as a vanguard for driving technology development and use. Whilst much of the emphasis is on the positive benefits arising from AI use, there is also a concern about potential negative effects and charting a more “responsible” approach to AI adoption (Gwagwa et al., 2021; Shilongo et al., 2022). The notion of “responsible AI” has emerged as a widely used heuristic for ethical AI governance. The term refers to the ethical development, deployment, and use of AI systems that prioritise fairness, accountability, transparency, and societal benefit. The concept underpins a variety of international frameworks for AI governance, including the Organisation for Economic Co-operation and Development (OECD) AI Principles,[3] the European Union (EU) Ethics Guidelines for Trustworthy AI,[4] and the Institute of Electrical and Electronics Engineers (IEEE) Global Initiative on Ethics of Autonomous and Intelligent Systems.[5]

Under the rubric of responsible AI, there is a sense that the wide social impact associated with AI adoption requires an elevated level of policy and legislative action to accelerate progress and coordinate a governance response across many sectors. However, these approaches towards AI also carry the risk of “exceptionalism” (McQuillan, 2015; Calo, 2015) around AI governance that overlooks existing institutional and policy roles and capabilities and does not fully consider local digital and data histories. Perceptions that AI possesses unique qualities or capabilities that set it apart from other forms of technology can lead to exaggerated expectations or fears about its potential impact on society, which in turn has the potential to drive policy processes that are disconnected from their broader environment. This is likely to result in various unintended consequences. Amongst others, we may see resistance to proposed laws, a duplication of governance roles and resource requirements, and increased complexity of decision-making and regulation across the digital governance ecosystem. All of this is particularly concerning for African countries’ technology governance, as it can reinforce elite dominance of policy activity and an even greater public official dependence on consultants, potentially stifling beneficial innovation and increasing the risk of harm (Guihot et al., 2017; Larsson and Heintz, 2020; Smuha, 2021; Plantinga, 2024). Moreover, the current approach disregards existing policy and regulatory capabilities that already address the use of data, algorithms, automation, and information and communication technology (ICT) and that offer opportunities for policy learning and adaptation to emerging needs.

Building a responsible AI ecosystem requires a broad range of capabilities, amongst which interdisciplinary collaboration is particularly important. For responsible AI policy to be developed, policymakers require domain-specific governance capabilities with respect to legislation and regulation. However, the challenges of AI also require cross-cutting technical expertise; for example, a sufficient understanding of machine learning (ML) and algorithms. This needs to be matched with capabilities for engaging with social and ethical issues, drawing on insights emerging from the social sciences regarding the implications of AI for culture, behaviour, employment, education, and human interactions, and debates about ethical issues such as fairness, transparency, accountability, privacy, and autonomy (Filgueiras et al., 2023).

The aim of this paper is to develop a richer picture of where these capabilities exist and to consider pathways for facilitating policy learning around a responsible way forward for AI in Africa. It is critical that policy actors acquire knowledge and insights from past experiences, data, experimentation, and adjacent fields to inform the development and refinement of AI governance and regulation. Such policy learning is particularly important in the African context, as policy stakeholders need to address multidimensional societal challenges whilst navigating distinctive institutional environments and political aspirations. A widely promoted objective on the African continent is to “leapfrog” technology adoption, and this is often justified by examples of mobile telephony and fintech where the conventional infrastructure and services of fixed-line telephones and branch-based banking were “skipped” over (Swartz et al., 2023). At the same time, there continue to be challenges in building sufficient state capacity to effectively support and regulate emerging industries and to realise leapfrogging or other social and economic goals, with a significant dependence on international donor funding and consultants (Heeks, 2002). A contextually appropriate approach to AI governance requires deeper reflection on technology possibilities and how available capacities can be strengthened or prioritised in pursuit of certain technical objectives. But there is also a more fundamental need to consider what values and ethics constitute our societies and to allow these to open up alternative pathways to AI governance and adoption (Mhlambi and Tiribelli, 2023).

To map existing capabilities, and to identify potential areas for policy learning, we first review existing literature to consider whether and how a more “moderate” (Calo, 2015) or contextually anchored form of exceptionalism (if any) may be applied to AI governance on the African continent. Then, we use the UNESCO Recommendation on the Ethics of AI (“UNESCO Recommendation”) (UNESCO, 2021a) as a relatively holistic framework to identify domains and subdomains relevant to AI governance. Finally, we apply (and add to) this framework through a study of emerging and established AI, algorithm, automation, data, and ICT governance examples from twelve African countries and consider how they could be relevant to AI governance going forward.

2. Policy learning for AI governance

In seeking a more granular view of AI-relevant governance capabilities, there is a need to better understand what AI, algorithms, and automation are, who is involved in policy and legislation activities, and how governance takes place. Unfortunately, science fiction depictions of AI continue to reinforce its exceptional character and lead to vague narratives about its operation and (future) impact on society. When the scope of a definition is too broad, the potential for developing impactful policy instruments or enforcing regulation is low. As Buiten suggests, “the narrative of AI as an inscrutable concept may reinforce the idea that AI is an uncontrollable force shaping our societies” (Buiten, 2019: 46). Even our attempts to support a more responsible approach to AI can reinforce this thinking. For example, in seeking to develop regulations that can “control” an “intelligent” technology’s “behaviour” or ensure it is “trustworthy,” we risk associating it with human characteristics and creating the impression of autonomy that is not there. Similar considerations apply to the use of metaphors, a key tool for policy actors involved in cyberlaw and related fields (Calo, 2015; Sawhney and Jayakar, 1999). It is therefore important to develop a more specific language and appropriate metaphors around how AI is developed and used.

2.1 Defining AI and implementation principles

We can start by looking at how AI is defined in current policy narratives. Whilst the UNESCO Recommendation points to an AI system’s “capacity to learn” as a core capability, it does identify two broad sets of methods: more human-configured, “symbolic” machine reasoning and more data-driven, “neural” ML (UNESCO, 2021a: 10; Sarker et al., 2021). Similarly, the OECD suggests that AI models may be built using data, with automatic updates as new data are received, and/or human knowledge, which tends to be more static in the form of fixed rules that determine automated responses or actions (OECD, 2019a; OECD, 2022). Both the OECD and UNESCO policy statements therefore include a mix of data-driven and symbolic forms of AI, separately or in hybrid form, and are centred on the more or less automatic operation of a computer system. For the regulation of more symbolic forms of AI, the focus is likely to be on how humans configure fixed rules and whether the system operates as expected. For more neural approaches to AI, much of the regulatory attention will be on the type of data that is used for training models, how the training is performed, the way new data are used to update the model, and the accuracy (and fairness) of predictions or automated actions (Mehrabi et al., 2021). In the second case, our understanding of “automatic” is potentially much broader, applying to both the operation of the model and also how the model is continually updated.
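To make the regulatory distinction concrete, a minimal sketch follows (our illustration, not drawn from the policy texts; the applicant attributes, threshold, and grouping are invented). For the symbolic case, oversight centres on whether the configured rule is appropriate and correctly executed; for the data-driven case, it shifts to the training data and to properties of the resulting decisions, such as how approval rates vary across groups.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    income: float
    existing_debt: float
    group: str  # hypothetical attribute used only for the fairness audit

# "Symbolic"/human-configured AI: a fixed rule set by people. Regulatory
# questions focus on how the rule was configured and whether the system
# executes it as specified.
def rule_based_decision(a: Applicant) -> bool:
    return a.income > 3 * a.existing_debt  # invented threshold

# "Neural"/data-driven AI: behaviour is learned and may shift as new data
# arrive, so attention moves to the decisions the model actually produces.
# A demographic-parity style check: approval rate per group.
def approval_rates(decisions: list[tuple[Applicant, bool]]) -> dict[str, float]:
    by_group: dict[str, list[bool]] = {}
    for applicant, approved in decisions:
        by_group.setdefault(applicant.group, []).append(approved)
    return {g: sum(v) / len(v) for g, v in by_group.items()}
```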

Defining “responsible” AI requires us to look beyond the technology at the wider network of people and infrastructure in which the technology is deployed, and at the values or principles informing its development and use (Raso, 2021). This is particularly important in African contexts where the gap in values or culture has been acknowledged as a key reason for the failure of many information technology projects on the continent (Heeks, 2005). Transparency, accountability, and fairness are examples of ethical principles that are now quite widely promoted and adopted (OECD, 2021). However, AI principles (and their definitions) can vary across contexts. One of these principles is autonomy. Reflecting on the implications of AI for autonomy, Mhlambi and Tiribelli (2023) note that this principle is usually conceived of in an individual, rational sense. However, they argue, in many African contexts autonomy is relational, grounded in the idea that an “individual cannot exercise autonomy for an individual is connected to and is a person through others” (Mhlambi and Tiribelli, 2023: 875). Reflecting this perspective, the African Charter on Human and Peoples’ Rights (ACHPR) is distinctive amongst international treaties by including “third generation” collective or solidarity rights, in addition to individual rights. It therefore provides a potential basis for a more relational form of AI governance. However, whilst much has been made of grounding AI adoption in relational African values, there is a need for concrete engagement with what those values look like and how individual and collective rights are balanced with respect to AI (Metz, 2021).

2.2. Governance stakeholders and structures

As noted earlier, the general-purpose nature of AI and its expected impact on society have been used to justify a more centralised approach to its governance. However, AI policy and regulation have both horizontal and vertical dimensions. Horizontal dimensions apply to AI in general, regardless of the sector or domain of application. Typical horizontal questions include those of data privacy, bias, intellectual property rights, infrastructure, skills and capabilities, and societal risk. At the same time, the application and regulation of AI is characterised by major differences across sectors. In the OECD’s 2021 update on AI policy, almost all of the evaluated countries have focused on health care, but a few other sectors are also prominent, including transportation, agriculture, energy, and public administration (OECD, 2021: 42). These are similar to the sector priorities identified by African public officials in a recent survey (Sibal and Neupane, 2021) and provide an indication of who the key stakeholders in AI governance may have been so far.

In seeking to frame AI governance through an African lens, Wairegi et al. (2021) suggest that stakeholders may be understood through the different normative claims they make with respect to the technology’s implementation. These claims are centred on three basic stakes: economic opportunity, political equality, and authenticity. Much of the early governance space on AI was occupied by private sector entities prioritising economic opportunity for the firm, whilst governments have been looking to assert leadership and shift the focus towards political equality. As a result, AI governance has taken a hybrid form, with governments sharing responsibility with academia and the private sector (Radu, 2021). Hybrid models are seen as valuable to governments because much of the knowledge about the technology is concentrated in private sector or academic organisations (Taeihagh, 2021).

In reality, the drive for responsible AI governance has enrolled a diverse spectrum of expertise, spanning legal, technology, business, and social domains (Filgueiras et al., 2023). Clearly, technology and operational expertise are pivotal for explaining how systems operate and identifying opportunities and risks that need technology, legal, or social responses. Social and governance expertise is essential for understanding the societal impacts and ethical implications of AI technologies, and in fostering inclusive dialogue to address concerns related to equity, bias, and fairness. Going further, there have been attempts to facilitate bottom-up engagement in policy formulation. Here, the wider “public” may be involved in a “social dialogue” about what should form part of a national AI strategy (OECD, 2021: 20). This dovetails with UNESCO’s support for more bottom-up approaches to AI policy and oversight in Africa, which encourages broad public involvement to ensure that AI adoption is “rooted” in local reality (UNESCO, 2022). Ultimately, this underscores the interdisciplinary nature of AI governance and emphasises the importance of fostering collaboration across diverse fields of knowledge and practice.

In general, however, the technical nature of AI, the pace of policy activity, and the breadth of expected policy impact mean that only a very small proportion of potentially affected stakeholders form part of policy deliberations (Radu, 2021). In addition, there is a significant asymmetry in access to information amongst the different actors, such as between technology companies and local regulators (Taeihagh, 2021). In AI, this asymmetry is exacerbated by the opaque character of the technology (Rudin, 2019). As a result, many African states are unable to assemble sufficient internal policy and administrative capacity to effectively navigate emerging technologies and engagements with global platform companies (Heeks et al., 2021).

2.3. Policy learning

One way in which African states are seeking to build governance expertise, and address the uncertainty about AI impact and the effectiveness of governance actions, is by adopting adaptive policy and regulatory approaches (Taeihagh, 2021). These are usually anchored in one or more “soft” law approaches and involve an iterative approach to policy development and adjustment as information is received about the effectiveness (or not) of interventions. Some of these strategies include temporary and experimental legislation with sunset clauses; anticipatory rulemaking that draws on decentralised, institution-specific feedback; increased use of data collection and analysis to assess regulatory impact; iterative development of common law; and legal foresighting to explore future legal developments and to develop shared understanding and language about future possibilities (Guihot et al., 2017: 443–444).

A widely promoted instrument in African AI governance is the regulatory sandbox. This interest in sandboxes may have arisen from previous exposure to them in the fintech space, but also in public administration settings.[6] As the World Economic Forum (WEF) suggests, sandboxes allow for technology to be tested in as “real an environment as possible before being released to the world” (WEF, 2021: 20). A sandbox aims to support cooperation amongst regulatory authorities and with other stakeholders, and to create a controlled environment for testing and understanding the regulatory implications of AI (OECD, 2023: 19).

Another mechanism of interest is an independent evaluation or assurance organisation that can assess the suitability of different AI-based technologies for adoption. For example, the ITU-WHO Focus Group on Artificial Intelligence for Health aims to establish a “standardized assessment framework for the evaluation of AI-based methods for health, diagnosis, triage or treatment decisions.”[7] At the same time, there are calls for independently produced empirical evidence to demonstrate the safety and efficacy of these technologies, which is critical for their introduction into certain sectors (see, for example, Seneviratne et al., 2019).

Whilst sandboxes and independent assessment may help policy officials grow their understanding of emerging technologies and the possible impact of a specific product on consumers, they provide more limited insights into the systemic effects associated with the introduction of data-driven digital services. In addition, because sandboxes are generally quite intimate environments, unless they are carefully designed to incorporate a broader community of participants, along with rules on transparency or disclosure, they can further obscure the already opaque character of technology-regulatory relationships (Wechsler et al., 2018). The same applies to pilot projects and public–private partnerships (PPPs) where governments involve “investor” partners to share the costs and risks, often using proprietary technologies or implemented under confidential arrangements and in the presence of immature or non-existent regulatory safeguards, all of which can increase the risk of regulatory capture and lock-in to certain business models (Guihot et al., 2017). Again, these critiques reinforce the concern that, via these types of partnership models, AI is coming into use through a “state of exception” (McQuillan, 2015: 568).

Therefore, understanding how AI implementation is (and can be) embedded in existing values and rights-based frameworks (e.g. country constitutions, public service charters, and international commitments) and legal safeguards (e.g. public procurement and data protection regulations) is critical for its ongoing legitimacy and sustainable adoption on the African continent. We must also be aware of path dependence and how established processes and capabilities, or the lack thereof, determine future outcomes, as well as the need for various supportive institutional and governance preconditions to be met for adaptive or dynamic forms of policymaking to be successful (Moodysson et al., 2017; Guihot et al., 2017). A key concern of this paper is the lack of awareness of what exists or has been done previously. Often, with emerging technologies, new “advocacy communities” develop that may not be familiar with work taking place in adjacent sectors and in previous generations of governance. In addition, the institutional memory associated with public officials and politicians may be lost when they change roles or committees across election cycles (Rayner, 2004). To address these issues, policy actors may look to support “institutional entrepreneurship” and policy learning across the wider socio-economic system (Labory and Bianchi, 2021: 1836).

3. Methodology

The objective of this paper is to contribute to a greater awareness of established and emerging policy and legislative capabilities relevant to Africa’s AI governance. To do so, we conducted a desktop-based, qualitative review of content contained in public policy documents, legislation, and government websites across twelve African countries. The research process involved two main steps that were implemented and updated iteratively as information sourcing and analysis progressed.

3.1 Development of initial framework

The first step involved the development of an initial framework based on a content analysis of the Policy Areas section of the UNESCO Recommendation. Whilst several other global and regional initiatives provide guidance on AI governance, the UNESCO Recommendation aligns closely with the main basis of our research: that AI development and use should be sensitive to regional diversity, context, and impact, with “due regard to the precedence and universality of human rights” (UNESCO, 2021a: 28) and the primacy of the rule of law (Adams, 2022). Whilst the Policy Areas of the UNESCO Recommendation tend to focus on themes relevant to the entity’s mandate, such as education and culture, the text goes broader in considering a range of social and economic instruments and outcomes. In addition, we sourced information from other frameworks and statements, such as the OECD’s Recommendation of the Council on Artificial Intelligence (OECD, 2019b), to expand on the sub-Policy Areas discussed below.

3.2 Sample country and content analysis, with framework review

The second step involved desktop-based policy and legislation sourcing and content analysis, according to the following inclusion criteria:

  • Type of information sources: As indicated by the grey shading in Figure 1, information sourcing was focused on public interest policy and legislation documents published by executive and legislative branches of government—both approved or enacted, and draft or tabled. This means that we excluded material on jurisprudence, intra-organisational or departmental policies, the status of implementation of policy or governance interventions, and evidence of non-governmental or de facto influence on public interest decision-making.

  • Jurisdiction: Only content from the national level of government was included, although national government references or activities relating to international and sub-national levels of governance were retained.

  • Date range: The focus of content sourcing was on material published between 2018 and 2023. However, where there were influential pieces of policy or legislation published at any time, such as constitutional provisions on human rights, these were included or referenced.

  • Policy areas: At first, all policy areas from the UNESCO Recommendation framework were explored at a high level to identify domains in which there were potentially interesting cases of existing algorithm, automation, data, ICT, and AI-related governance. Then, a limited number of focus Policy Areas and sub-Policy Areas were selected for in-depth content analysis. The final sub-Policy Areas included were the following:

    1. Strategy and multi-stakeholder engagement: This is a sub-Policy Area of Ethical governance and stewardship and typically included AI, 4IR, ICT, and digital strategies developed out of a ministry of ICT or president’s office, usually with the aim of initiating a cross-cutting programme of work across several sectors and attempting to involve multiple stakeholders in governance activities.

    2. Human dignity and autonomy: This is a cross-cutting sub-Policy Area of Communication and information, Health and social well-being, and Gender. Here, the focus was on electronic transactions legislation that enables, and defines the boundaries of, automated information processing, including requirements for accountability and human oversight.

    3. Sector-specific governance: This focuses on sub-Policy Areas related to Economy and labour, Public sector adoption, and Environment and ecosystems.

  • Countries: Ten African countries were included in the analysis based on the location of work by research partners, largely around Southern and East Africa: Botswana, Kenya, Malawi, Mauritius, Mozambique, Namibia, Rwanda, South Africa, Zambia, and Zimbabwe. Two additional countries were added to ensure the sample included cases from West and North Africa: Benin and Egypt. Whilst the included countries are biased towards the southern and eastern regions and a single language (English), they do cover a broad spectrum of population and economic sizes and are therefore a useful starting point for comparative analysis.

Figure 1. Policy and legislative landscape with scope of data collection highlighted in grey.

The process of desktop information collection involved identifying and browsing key government department or agency websites for each country, usually the ministry of ICT, ICT regulatory authority, state ICT agency, and personal data protection authority. We also consulted regional and global legislation databases, including ICT Policy Africa,[8] International Labour Organization (ILO) NATLEX,[9] and the World Intellectual Property Organization (WIPO) Lex Database.[10] Finally, we performed an open internet search for AI-related policy or legislative activities for each country. In this way, we were able to locate the majority of relevant documents and, by triangulating across these sources, determine whether we had saturated document sourcing on the most relevant or recent material.

To explore the retrieved documents, and to support inclusion or exclusion decisions, we started with a search for algorithm-, automation-, and intelligence-related keywords. This allowed us to quickly identify text with direct relevance to AI policy or regulatory capabilities. We then performed more in-depth scanning and reading of documents to understand the context of the policy or legislation and to identify other relevant sections of text. This was used in the final analysis and compilation of findings below. Initial actions were therefore largely deductive, guided by the UNESCO Recommendation and our initial framework, and became more inductive as content was analysed and the framework was updated.
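For readers who wish to reproduce this first, deductive pass, a minimal sketch follows (our reconstruction, not the authors’ actual tooling; it assumes plain-text extracts of the policy documents stored in a local folder, and the keyword patterns are illustrative):

```python
import re
from pathlib import Path

# Keyword families used for the first screening pass (illustrative patterns).
PATTERN = re.compile(
    r"algorithm\w*|automat\w*|artificial intelligence|machine learning|\bAI\b",
    re.IGNORECASE,
)

def screen_documents(folder: str) -> dict[str, list[str]]:
    """Return, per document, the sentences matching any screening keyword."""
    hits: dict[str, list[str]] = {}
    for path in Path(folder).glob("*.txt"):  # assumes plain-text extracts
        text = path.read_text(encoding="utf-8", errors="ignore")
        sentences = re.split(r"(?<=[.!?])\s+", text)
        matched = [s.strip() for s in sentences if PATTERN.search(s)]
        if matched:
            hits[path.name] = matched
    return hits

# Example: flagged = screen_documents("policies/botswana")
```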

In the following sections, we provide an overview of the UNESCO Recommendation and our initial framework. We then present results of the content extraction and analysis for the selected countries.

4. Learning from the UNESCO Recommendation

The UNESCO Recommendation was adopted by all 193 member states in November 2021 as “a globally accepted normative instrument” to “guide States in the formulation of their legislation, policies or other instruments regarding AI, consistent with international law” (UNESCO, 2021a: 14). Whilst adoption of this type of recommendation by member states is voluntary, by being endorsed at the highest level such recommendations are seen as “possessing great authority” and are “intended to influence the development of national laws and practices.”[11] The intention is clear that the statements in the UNESCO Recommendation are meant to set a direction for national AI policy and legislation.

Both the voluntary nature and the explicit intent to influence are important to keep in mind, as the UNESCO Recommendation is similar to other recommendations in instructing states to adopt quite specific values, principles, and policy actions. This can seem at odds with calls for, and the reality of, more bottom-up governance processes that governments and citizens may be pursuing.

In structure, the UNESCO Recommendation is similar to, for example, the UNESCO Recommendation on Open Science, also published in 2021 (UNESCO, 2021b). The core of the document outlines a set of values that inspire “desirable behaviour,” followed by a set of principles that “unpack the values underlying them more concretely” so that “the values can be more easily operationalized” in the third part: a set of policy statements and actions that states should or must take (UNESCO, 2021a: 18).

In developing its rationale, the UNESCO Recommendation suggests that ethical values and principles “can help develop and implement rights-based policy measures and legal norms” (UNESCO, 2021a: 6). It is therefore deeply invested in ethics as a vehicle for shaping human rights-based policy instruments and law as the main drivers of responsible AI outcomes. Without exploring questions about the role of ethics in AI, especially as it relates to other critical themes such as decolonisation, autonomy, and the politics of intelligence (Adams, 2021; Mhlambi and Tiribelli, 2023), it is notable that the UNESCO Recommendation draws on ethics as a “holistic,” “comprehensive” framework of “interdependent values, principles and actions” to guide society’s response to AI (UNESCO, 2021a: 10). It is therefore ambitious in the breadth of its attempt to define an agenda for responsible AI governance.

Following the discussion in the previous sections of this paper, it is interesting to see how the UNESCO Recommendation approaches some of these themes. For example, as noted earlier, the definition of AI in this document is similar to that used by the OECD in covering a broad spectrum of methods, from more symbolic/human-configured to more neural/data-driven approaches.

From a sector perspective, as may be expected, the UNESCO Recommendation focuses on AI implications in relation to “the central domains of UNESCO: education, science, culture, and communication and information” (UNESCO, 2021a: 10). Nonetheless, significant weight is given to economic and environmental sectors, amongst others.

Of particular relevance to this paper is whether and/or how the UNESCO Recommendation provides guidance on governance processes and arrangements. There is a strong emphasis on alignment with international human rights law. It also suggests that the “complexity of the ethical issues surrounding AI necessitates the cooperation of multiple stakeholders” (UNESCO, 2021a: 14), and so there is regular encouragement for multi-stakeholder cooperation and public consultation. In some areas, the guidance is more specific, such as proposing that states establish a “national commission for the ethics of AI” (UNESCO, 2021a: 41). Also, focusing on the use of AI by public authorities, governments “should” adopt a regulatory framework that sets out the procedure for ethical impact assessment with associated auditability and transparency, including multidisciplinary and multi-stakeholder oversight (UNESCO, 2021a: 26). Otherwise, aside from references to personal data protection and an emphasis on aligning with international human rights law, there is only occasional mention of existing policy or legislation, and of associated governance arrangements or policy learning processes. Most significant is a paragraph under Policy Area 2: Ethical Governance and Stewardship that gives countries a sense of the governance journey, from establishing norms to developing legislation, and learning while doing through prototypes and regulatory sandboxes:

In order to establish norms where these do not exist, or to adapt the existing legal frameworks, Member States should involve all AI actors (including, but not limited to, researchers, representatives of civil society and law enforcement, insurers, investors, manufacturers, engineers, lawyers and users). The norms can mature into best practices, laws and regulations. Member States are further encouraged to use mechanisms such as policy prototypes and regulatory sandboxes to accelerate the development of laws, regulations and policies, including regular reviews thereof, in line with the rapid development of new technologies and ensure that laws and regulations can be tested in a safe environment before being officially adopted (UNESCO, 2021a: 28).

More recently, UNESCO has published a Readiness Assessment Methodology (RAM) to assist member states in assessing their current skills and infrastructure readiness to develop and adopt AI, as well as their institutional and regulatory capacity to “implement AI ethically and responsibly for all their citizens” (UNESCO, 2023: 6). The RAM goes on to explore whether countries have, for example, developed an AI strategy, or whether there are indirect influences on AI-related regulation. It also reflects on a range of potentially relevant legislation and policy, starting with data protection but extending to due process and public sector capacity.

Whilst acknowledging that there are likely to be concerns and questions about the content of the UNESCO Recommendation and RAM, and how they are to be interpreted or implemented locally, these documents do provide a relatively comprehensive narrative from which to start exploring the landscape of policy and legislation that AI initiatives may intersect with.

Based on this narrative, we developed a first version of a framework that was then updated to a final version during the country analysis, outlined in Table 1. The framework was mainly used to organise the Policy Areas of the UNESCO Recommendation into a quick reference for sourcing, screening, and analysis, and as a placeholder for adding sub-Policy Areas and associated content as the analysis progressed. The following section presents the results of our analysis and highlights potential opportunities for policy learning.

Table 1. Legislation and policy activities identified via Policy Areas in UNESCO Recommendation on the Ethics of AI

5. Learning from policy and legislation in Africa

There is an established and growing history of technology governance in African countries that will be critical to AI adoption going forward. To better leverage existing capabilities for AI, we may look to the experiences and critiques covering decades of information and decision-support system implementation, often in niche applications: from the medical expert systems of the 1980s (Forster, 1992) to the growth of personal data protection legislation (Daigle, 2021), the expansion of digital and biometric national identity systems (Breckenridge, 2021), and wider e-government and statistical or open data programmes (Davies et al., 2019).

More recently, there have been investigations into how existing law applies to automated data processing (Makulilo, 2013) and now more explicitly to AI (Shilongo et al., 2022). For example, a common provision in personal data protection legislation is that data subjects cannot be subject to legal or similar effects arising from a decision based solely on automatic data processing. However, this constraint is limited to decisions and to the use of personal data, which means that other types of data and forms of data use are not covered (Razzano et al., 2020: 45).

The interest in AI has elevated discussions about algorithm, data, and technology ethics and helped deepen our understanding of their often unequal impact on different populations. So, the current window of AI-related activity is also an opportunity to improve our algorithm, automation, and data policy awareness and to critically reflect on which governance capabilities need to be strengthened (or new ones created).

5.1 Strategy and multi-stakeholder engagement

The first Policy Area explored is Strategy and multi-stakeholder engagement. A growing number of countries are pursuing national AI strategies. Their character is reflected in Smart Africa’s AI Blueprint,[12] which suggests that an AI strategy is important for demonstrating leadership and setting an inclusive vision at a national level. As a result, one of the key tensions to manage is the role of different national ministries and their cooperation. Nonetheless, ministries of ICT seem to be the main home for AI-related policy, from Egypt[13] and Benin[14] to South Africa[15] and Rwanda.[16]

The way in which these strategies have involved different stakeholders is similar. Like Mauritius,[17] Egypt’s strategy was developed by a national council made up of government officials and “independent experts,” with private sector input.[18] South Africa’s[19] and Kenya’s[20] reports addressing AI policy direction were also developed by government-academia-private sector advisory bodies.

Soft law and multi-stakeholder mechanisms are at the centre of these countries’ proposed approach to AI governance. Sandboxes are mentioned regularly in strategies and are active in a number of countries, mainly in the fintech space.[21] The sandbox idea is noted in the Benin National AI and Big Data Strategy, where one of the specific objectives is to “update” the institutional and regulatory framework for AI by establishing “a controlled environment for the development of AI initiatives.”[22] Often these types of initiatives involve private sector or donor support in the form of finance, skills, and technology.

In general, PPPs and donor partnerships are seen as critical to the development of ICT infrastructure, services, skills, and governance capabilities on the continent. Rwanda is establishing various mechanisms in this direction, such as a “shared risk fund” for government AI projects and a co-investment fund for AI start-ups and incubators.[23] Beyond AI, in Malawi, PPPs are important to the financial services regulatory sandbox hosted by the Reserve Bank of Malawi[24] and in the development of government e-services.[25] In addition, the World Bank-financed Digital Malawi Project is coordinated by the PPP Commission (PPPC) and governed by the PPP Act,[26] the Public Procurement Act,[27] and the World Bank’s procurement rules. This arrangement seems to support a relatively transparent procurement and implementation approach, with regular updates on the project website about bidding and award activities. In addition, the public is able to raise concerns through a Grievance Redress Mechanism.[28] There is also some recognition of the need for public officials to be equipped to manage PPPs. Malawi’s Digital Government Strategy emphasises the need to build capabilities in government departments and agencies “for the purpose of managing sourcing of ICT systems/services […] including Vendor & Contract Management, Quality Assurance, Service Level Management.”[29] Mozambique’s Strategic Plan for the Information Society similarly recognises the need to develop guidelines for public officials on the procurement of ICT programmes and equipment.[30]

Although “multi-stakeholder,” the above mechanisms are largely confined to national technology and finance actors and focus on economic or technology goals. Nonetheless, many of the emerging AI strategies are also explicit about broader ethics principles and the involvement of a more diverse spectrum of entities. Mauritius calls for a “code of ethics about what AI can and can’t do.”[31] Egypt’s National AI Strategy considers bias that may arise when transferring a model from one context to another, as well as the unequal impact on employees.[32] In fact, the country’s National Council for AI has gone on to publish a Charter for Responsible AI that seeks to articulate “Egypt’s interpretation of the various guidelines on ethical and responsible AI [including UNESCO and the OECD], adapted to the local context and combined with actionable insights.”[33] Rwanda is also explicit in its AI Policy about placing “responsible and inclusive AI” at the centre of its mission and in seeking to embed AI ethics in government operations. This is to be supported by “society consultations,” perhaps pointing to a role for more diverse publics to provide input or oversight.[34]

The Benin Strategy makes an interesting point about accountability in suggesting that all government, private sector, and end user stakeholders

…will need to be involved in the design, implementation, and ongoing monitoring and evaluation of the strategy. This principle includes mutual accountability between duty bearers (State, Local Government, Private Sector Leaders, Civil Society Leaders) and rights holders (the general public and other specific beneficiaries).[35]

Many of these countries have also been or are active in the Open Government Partnership (OGP), which has been a source of learning both on technology use, especially around data sharing, and on how to implement multi-stakeholder governance (or the challenges associated with it). For example, the OGP handbook on managing multi-stakeholder forums includes practical examples of different membership and decision-making models, including cases from Ghana and other developing countries.[36]

Previously, Malawi signalled support for multi-stakeholder and public participation in its Digital Government Strategy,[37] and Zambia’s Information and Communications Technology Authority has led initiatives focusing on gender inclusion, such as collecting sex-disaggregated data, establishing a Gender and ICT Portfolio, and developing a draft Gender and ICT Strategy.[38]

In South Africa, there seems to be an attempt to drive inclusion at a more structural level through the 2019 White Paper on Science, Technology and Innovation (STI), with direct relevance to AI. In addition to investing in core technology research around AI, the White Paper is concerned about potential exclusion or harm resulting from AI, such as “the risk of gender biases being perpetuated through incorporation into AI applications.”[39] This history with AI ethics dates back more than a decade to when the Centre for Artificial Intelligence Research (CAIR) was established, and a CAIR unit went on to play a central role in the development of the UNESCO Recommendation.[40]

Across the continent, there are many other policies and authorities that take a direct stance on inclusion. For example, in Egypt, the National Academy of Information Technology for Persons with Disabilities (NAID) is an affiliate of the Ministry of Communications and Information Technology and a relatively unique entity on the continent. NAID has been exploring the link between AI and assistive technologies.[41] Moreover, the country’s National Human Rights Strategy was released in 2021 and addresses the emerging impact of technology on human rights (such as cybercrime), as well as the role of technology in supporting the inclusion of women and increasing awareness about human rights.[42] Similarly, in South Africa, the Human Rights Commission has been supporting work and facilitating workshops on AI since 2021,[43] and there is a range of legislation and entities addressing discrimination and inclusion that could speak to responsible AI adoption.[44] Rwanda’s Child Online Protection Policy was published four years ago, but it is not clear whether any of its extensive set of recommendations were implemented and can be applied to AI going forward.[45]

5.2 Human dignity and autonomy

The second Policy Area examined was Human dignity and autonomy. The UNESCO Recommendation outlines quite broad concerns about human interactions with AI and calls on member states to apply ethical AI principles to brain–computer interfaces, to ensure that users can easily determine whether they are interacting with a living person or an AI system imitating a person, and to allow them to refuse interaction with an AI system and request human intervention (UNESCO, 2021a: 37–38).

Perhaps the most mature area of policy or legislation concerning human–technology interaction in African countries is around electronic transactions and evidence. This was driven by the relatively mundane need to ensure that the exchange of email, digital documents, and electronic signatures is legally recognised. For example, Botswana’s 2014 Electronic Communications and Transactions Act (ECTA) determines that a contract is formed “by the interaction of an automated message system and a person, or by the interaction of automated message systems”[46] and recognises the admissibility of electronic records as evidence in legal proceedings, subject to the Electronic Records (Evidence) Act.[47] Similarly, Mozambique’s equivalent of an ECTA establishes the legal effect of electronic messages and signatures, including interactions between automated messaging systems “even if no person has verified or intervened.”[48] Namibia’s ECTA also provides for the admissibility of electronic evidence and procedures for determining its evidential weight.[49] Importantly, across most of these countries’ ECTAs, a natural person has the right to correct or withdraw a data message.

In many countries, there are also transaction-related themes in personal data protection legislation. Zambia’s Data Protection Act is typical in defining the rights of data subjects. This includes rights related to interactions between natural persons and programmed or data-driven systems. Specifically, data subjects cannot be subject to legal or similar effects arising from a decision based solely on automatic data processing. In Rwanda, this specifically includes “profiling,” which is closer to how we see current AI working:

use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse and predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.[50]

Zambia’s Act is similar to Rwanda’s in being relatively explicit about an obligation for human intervention. Data controllers must

implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, including the right to obtain human intervention on the part of the data controller for purposes of enabling the data subject to express the data subject’s point of view and contest the decision.[51]

We see similar sentiments in emerging AI governance. A general guideline in Egypt’s Charter for Responsible AI states:

Final Human Determination is always in place. This means that ultimately, humans are in charge of making decisions, and are able to modify, stop, or retire the AI system if deemed necessary. Individuals with that power must be decided upon by the owner of the system.[52]

The importance of human oversight and determination, both individual and inclusive public oversight, is reiterated throughout the UNESCO Recommendation, as expressed below:

accountability must always lie with natural or legal persons and that AI systems should not be given legal personality themselves (UNESCO, 2021a: 28).
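Read together, these provisions translate into a recognisable system-design pattern. The sketch below is our illustration of that pattern (the scoring threshold, data structures, and routing rule are invented, not statutory): decisions with legal or similar effect are routed to an accountable human reviewer rather than taking effect automatically.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    outcome: str
    automated: bool
    reviewer: Optional[str] = None  # natural person accountable for the outcome

def decide(score: float,
           has_legal_effect: bool,
           human_review: Callable[[float], tuple[str, str]]) -> Decision:
    """Route decisions with legal or similar effect to a human reviewer."""
    outcome = "approve" if score >= 0.5 else "decline"  # invented threshold
    if has_legal_effect:
        # Safeguard in the spirit of the Zambian and Rwandan provisions:
        # no legal effect may rest solely on automated processing.
        outcome, reviewer = human_review(score)
        return Decision(outcome, automated=False, reviewer=reviewer)
    return Decision(outcome, automated=True)

# Example: decide(0.72, True, lambda s: ("approve", "credit officer A"))
```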

How existing legislation addresses accountability in electronic transactions could again be of interest or relevance to AI. There is a specific section in Botswana’s ECTA that is similar to other countries’ in aiming to protect consumers engaging in automated online transactions (usually the “addressees”) with an “originator.” This includes clarifying the accountability of the “originator” for an “information system programmed by, or on behalf of the originator to operate automatically unless it can be proved that the information system did not properly execute such programming.”[53] Provisions on the role of internet service providers (ISPs) may be similarly relevant to AI. As with Mozambique[54] and other countries, Botswana’s ECTA treats electronic communications service providers and similar intermediaries as a “mere conduit” if they do not select or modify data received from an originator, in which case these entities will not have civil or criminal liability for the communication.[55]

Closely linked to the preservation of human dignity and autonomy is the need to address the anthropomorphisation of AI technologies, especially in the “language used to mention them.”[56] Anthropomorphic language is usually most evident in micro-level descriptions of how AI works. This often starts with a definition of AI as mimicking the human brain (as in definitions of deep learning[57]) or performing “tasks that require intelligence without additional programming; in other words, AI involves human-like thought processes and behaviours.”[58] Otherwise, the human-like character of AI is depicted across a range of use cases. The Mauritius strategy considers the impact of AI on government services, where it is seen to “emulate human performance, for example, by learning, coming to conclusions or engaging in dialogues with people” and being able to “learn, reason and decide at levels similar to that of a human.”[59] The suggestion that AI “decides” is evident in Egypt’s classification of use cases according to levels of human or technology autonomy. For example:

B. AI decides, human implements

  • Identifying Risks in pipelines (Leakage, breakage, etc.)

  • Staffing & Headcount projections

  • Commodity Price Prediction, and procurement.[60]

In general, it seems that AI is being depicted in a more deterministic form than most other technologies, with high levels of autonomy and associated risk. Efforts to “control” AI are therefore a prominent theme. As with most countries, Mauritius expects the legal framework to provide “safeguards” or “guardrails” to guide the development and use of technology, ultimately to ensure “that humans have the ability to override artificial intelligence decisions and maintain control over the technology.”[61] Of broader interest, though, are attempts to assert regional influence and “values” onto AI, which point to a growing agenda to locally define AI and its use at a more fundamental level.[62]

5.3 Sector-specific governance

Ultimately, much of the impact of AI governance will be realised in sector verticals. In many policy areas, cross-cutting legislation will be used to initiate sector-specific regulations, whether determined by government or through self-regulation by stakeholders. For example, South Africa’s Information Regulator is approving codes of conduct as a “voluntary accountability tool” to support (not replace) the requirements of the Protection of Personal Information Act (POPIA).[63] In other countries and policy domains, there are similar examples of sectors having tailored technology governance to their specific needs in ways that could be relevant to AI governance going forward.

Retail and consumer protection

As a relatively early technology adopter, the retail sector has been navigating a range of increasingly complex and significant governance issues related to e-commerce, data, and automation over the past decade. Whilst ECTA-like legislation laid the foundations for the legality of online purchasing, the growing dominance of international accommodation, taxi, and delivery platforms raises questions about consumer welfare and the viability of local competitors. Importantly, the welfare of consumers may be the domain of one or more (and therefore overlapping) authorities and pieces of legislation, including technology (legality and privacy in electronic transactions), data protection (mainly privacy), competition (mainly pricing), and consumer protection (transparency and complaints handling). In addition, there are potentially many other sector-specific consumer protection actors.

From a competition perspective, a report by South Africa’s Competition Commission on the digital economy notes various areas of concern related to the global platform economy, one of which is control over data assets. Another is profit-maximising AI algorithms leading to (possibly unintentional) collusion on pricing.Footnote 64 Finally, there is a concern that platform recommendation algorithms are biased (“self-preferencing”) towards their own goods and services.Footnote 65 All of these competition issues are relevant to the value provided to consumers, and for addressing potential exploitation. In some countries, competition authorities may even be better resourced than data protection authorities to address privacy issues.Footnote 66
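To make the self-preferencing concern concrete, the toy example below shows how a hidden boost for a platform's own listings can invert an otherwise relevance-based ranking. It is a deliberately simplified sketch; the field names and scores are hypothetical and do not represent any platform's actual algorithm.

```python
def rank_listings(listings, own_brand_boost=0.0):
    """Sort listings by relevance plus an optional boost for the platform's
    own goods. `relevance` and `is_own_brand` are hypothetical fields."""
    return sorted(
        listings,
        key=lambda item: item["relevance"]
        + (own_brand_boost if item["is_own_brand"] else 0.0),
        reverse=True,
    )


offers = [
    {"name": "third-party seller", "relevance": 0.80, "is_own_brand": False},
    {"name": "platform's own line", "relevance": 0.70, "is_own_brand": True},
]

# Neutral ranking puts the better-matched third-party offer first...
print([o["name"] for o in rank_listings(offers)])
# ...while a hidden boost quietly inverts the order.
print([o["name"] for o in rank_listings(offers, own_brand_boost=0.2)])
```

The point of the sketch is that self-preferencing need not be visible in the interface at all; it can sit in a single parameter of the ranking function, which is partly why transparency obligations matter for oversight.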

From a consumer protection perspective, Namibia's National Consumer Protection Policy aims to eliminate unfair and deceptive practices "directed at the poorest and most vulnerable members of society."Footnote 67 Significantly, the policy recognises the diversity of existing legislation aimed at promoting consumer welfare and protection—from financial services to health and food—and therefore proposes a harmonised legal framework. Implementation would still rest with sector regulators, but with support from a central Consumer Protection Office. Similarly, the ECTA in Zambia acknowledges the need for cooperation between ICT regulators and overlapping entities, including consumer protection commissions, on issues such as "unfair trading" associated with electronic transactions.Footnote 68

Financial services

The financial services industry has been an advanced technology user on the continent, which has also meant exploring adaptive policy approaches, such as regulatory sandboxes around fintech.Footnote 69 In addition, financial service industry associations have been actively seeking to interpret and apply (or resist) regulations relevant to responsible or ethical AI. For example, the two codes of conduct approved under POPIA to date are those of the Banking Association of South Africa (BASA) and the Credit Bureau Association (CBA).

BASA recognises that its members use automated decision-making "to provide a profile of the data subject" and outlines commitments on transparency and complaints procedures.Footnote 70 In its code, the CBA suggests that members, such as Experian and TransUnion, "do not provide decisions or even the basis for a decision on credit in terms of Regulation 23A of the [National Credit Act, 34 of 2005 (NCA)]"Footnote 71 and does not mention the relationship between automated decision-making and profiling. The NCA and its associated regulations describe the steps and methods for calculating affordability. Similar considerations apply to the regulation of insurance, which prohibits discrimination based on race, age, and gender, amongst other possible criteria that could form part of an AI-based risk rating.Footnote 72 Such nuances of interpretation, across existing legislation and new laws, codes, and guidelines, will become more important as AI adoption and its impacts increase.
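As a rough illustration of the kind of affordability arithmetic the NCA regulations describe, and of the transparency commitments in the codes of conduct, consider the following sketch. It is not the statutory method: the field names, approval rule, and disclosure flag are illustrative assumptions only.

```python
def discretionary_income(gross_income, statutory_deductions,
                         living_expenses, existing_debt_repayments):
    # Gross income less deductions, living expenses and existing debt
    # obligations: a simplification of the affordability logic the NCA
    # regulations describe, not the statutory calculation itself.
    return (gross_income - statutory_deductions
            - living_expenses - existing_debt_repayments)


def credit_decision(applicant, proposed_instalment):
    """Return a decision that discloses its automated nature and reasons,
    echoing the transparency and complaints commitments in the BASA code.
    All field names and the approval rule are hypothetical."""
    di = discretionary_income(
        applicant["gross_income"],
        applicant["statutory_deductions"],
        applicant["living_expenses"],
        applicant["existing_debt_repayments"],
    )
    return {
        "approved": di >= proposed_instalment,
        "automated": True,  # flagged so the data subject can be informed
        "reasons": [
            f"discretionary income {di:.2f} vs instalment {proposed_instalment:.2f}"
        ],
    }


applicant = {"gross_income": 20000, "statutory_deductions": 3000,
             "living_expenses": 9000, "existing_debt_repayments": 4000}
print(credit_decision(applicant, proposed_instalment=3500))
```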

From a practice perspective, the expansion of model use in the banking sector—from financial risk modelling to human resources decision-making (Deloitte, 2021)—and new international guidelines and regulations introduced after the global financial crisis have meant that model risk management (MRM) is now an established capability in many African banks. The impact of AI on MRM on the continent is an emerging issue (Richman et al., 2019). The associated governance frameworks, although from a heavily regulated space, may offer opportunities for learning in other sectors.
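A minimal sketch of the model inventory record that typically underpins MRM frameworks is shown below. The fields, risk tiers, and revalidation cadence are illustrative assumptions rather than requirements of any particular regulation.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ModelRecord:
    """One entry in a model inventory, the backbone of most MRM frameworks.
    The fields and the tier-based review cadence below are illustrative
    assumptions, not requirements of any particular regulation."""
    name: str
    owner: str
    risk_tier: int          # 1 = highest materiality
    last_validation: date
    known_limitations: list = field(default_factory=list)

    def validation_due(self, today: date, base_cycle_days: int = 365) -> bool:
        # Higher-tier (more material) models get shorter revalidation cycles.
        allowed = base_cycle_days // self.risk_tier
        return (today - self.last_validation).days > allowed


credit_model = ModelRecord(
    name="retail-credit-scorecard",
    owner="credit-risk",
    risk_tier=1,
    last_validation=date(2023, 1, 15),
    known_limitations=["thin-file applicants", "post-pandemic income volatility"],
)
print(credit_model.validation_due(date(2024, 6, 1)))  # True: overdue for review
```

The documented limitations field is worth noting: much of MRM's value for responsible AI lies less in any calculation than in forcing owners to record what a model should not be trusted to do.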

Environment and ecosystems

Another field in which there is relatively strong evidence of public interest model governance is environmental impact assessments (EIAs). EIAs are based on regulated procedures of data collection and consultation with affected stakeholders, and are therefore a potentially interesting space in which to consider the relationship between models and other forms of knowledge. In South Africa, the regulations around EIAs aim to ensure transparency around the method being used (and associated predictions of impact), whilst giving key stakeholders opportunities to understand the EIA process and outcomes and to make representations to relevant authorities where they disagree.Footnote 73 Closely connected to EIAs are air quality impact assessments, which are governed by a "code of practice" on air dispersion modelling. This code provides guidance on approved models that can be used, and requires that sufficient information is made available to allow authorities to make appropriate decisions.Footnote 74 EIAs are seen as likely targets for AI in other parts of Africa too. Concerns about AI impact are leading to a renewed interrogation of established algorithmic practices in French-speaking countries of sub-Saharan Africa, with researchers pointing out biases in the Leopold matrix and Fecteau grid as popular EIA tools (Yentcharé and Sedami, 2023).
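For readers unfamiliar with what such codes of practice govern, the sketch below implements the standard Gaussian plume formulation that underlies many regulated air dispersion models. The dispersion coefficients are passed in as placeholders; in practice they are derived from atmospheric stability curves, and the approved models and parameters are specified by the regulations themselves.

```python
import math


def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Pollutant concentration from a continuous point source, using the
    standard Gaussian plume formulation with ground reflection.
    Q: emission rate (g/s); u: wind speed (m/s); y: crosswind distance (m);
    z: receptor height (m); H: effective stack height (m). sigma_y and
    sigma_z normally depend on downwind distance and atmospheric stability;
    here they are supplied directly as placeholder values."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return (Q / (2 * math.pi * u * sigma_y * sigma_z)) * lateral * vertical


# Receptor on the plume centreline at ground level (g/m^3).
print(plume_concentration(Q=10, u=3, y=0, z=0, H=50, sigma_y=36, sigma_z=19))
```

The governance point is that this is an interpretable, documented model: regulators can inspect its assumptions and inputs, which is precisely the property that codes of practice seek to preserve as assessments become more automated.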

More broadly, the environment and ecosystems sector may also provide insights into how data collection and use, as a key element of AI implementation, can be made more responsible. One of these insights relates to the empowerment of local stakeholders. For example, a recent commentary has considered how Namibia's Nature Conservation Amendment Act of 1996 may be applied to data governance (Shilongo, 2023). The Act makes provision for community conservation governance via the establishment of a Conservancy Committee, which seeks to ensure not only that members of a community derive benefits from the use and sustainable management of conservancies and their resources, but also that community members within a specific conservation constituency know their rights. Learning from such structures and legislation, an equivalent community data committee and/or worker could broker relationships with external technology providers and governance actors.

Public sector adoption

Ultimately, many of these sectors are shaped by more generic public administration practices and rules, and the public sector is a site of AI adoption in its own right. A key public sector policy action in emerging AI strategies is the training of public officials. To ensure "Trustworthy AI adoption in the public sector," Rwanda is establishing capacity-building programmes for civil servants in regulatory, policymaking, or legislative roles, for those expected to use AI, and for those likely to be procuring AI solutions. This is to be supported by matching funding initiatives, hackathons, procurement guidelines, and maturity assessments.Footnote 75 Finally, there are plans to embed "AI Ethics" functions in ministry and agency digital governance environments.

At the same time, it is important to note that many countries already have established frameworks, policies, and legislation that provide direction to government entities and public officials on ethical issues. In South Africa, the Batho Pele principles in the 1997 White Paper on Transforming Public Service DeliveryFootnote 76 call on public service providers to ensure better provision of information, increased openness and transparency, and opportunities for the public to remedy mistakes and failures; all of which provide a framework for how the rights of people may be protected or fulfilled with increased AI use. Moreover, the law governing administrative justice speaks to the right of persons to "administrative action that is lawful, reasonable and procedurally fair and that everyone whose rights have been adversely affected by administrative action has the right to be given written reasons."Footnote 77 This is becoming increasingly important for both private and public entities because of the expanding role of (and potential abuse by) digital and AI service providers in South Africa's public sector (Razzano, 2020).
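To make this concrete, the sketch below shows the kind of decision record an algorithm-assisted administrative process could emit so that written reasons, and the information relied upon, remain available to affected persons and to the access-to-information requests discussed next. It is a minimal illustration, not a reading of any statute; all field names are our own assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class AdministrativeDecisionRecord:
    """An illustrative record of an algorithm-assisted administrative action.

    Captures the written reasons and the information relied upon, so that an
    affected person can understand and challenge the decision. Field names
    are our own assumptions, not drawn from any statute."""
    applicant_ref: str
    action: str                 # e.g. "grant application declined"
    reasons: list               # plain-language written reasons
    inputs_used: dict           # data relied upon; supports later ATI requests
    automated: bool             # whether the decision was taken without human review
    reviewing_official: Optional[str] = None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())


record = AdministrativeDecisionRecord(
    applicant_ref="A-2024-0137",
    action="grant application declined",
    reasons=["declared income above programme threshold"],
    inputs_used={"declared_income": 9200, "threshold": 8500},
    automated=True,
    reviewing_official="case-officer-12",
)
print(record.reasons)
```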

It is for these reasons that access to information (ATI) legislation has a key role to play in helping the public to understand what personal or other information was used in decision-making (algorithm-assisted or not), and how it was used. African states continue to grapple with the passing of ATI legislation; as recently as 2022, the Namibian National Assembly passed an Access to Information Bill.Footnote 78 Where legislation has been enacted, implementation and enforcement continue to be a challenge, partly due to conflicts with secrecy or similar laws and norms, but also because of the administrative requirements for classifying and releasing information.

Going forward, coordination and accountability are seen as key issues for the public sector. Because AI and ICT more broadly are seen as cross-cutting issues, the ICT ministries and authorities are expected to play a central role in public sector projects and in supporting accountability. For example, in Egypt, the Charter for Responsible AI requires that

Government AI projects, similar to Digital Transformation projects, should be commissioned and supervised by Ministry of Communications and Information Technology (MCIT) in order to ensure compliance with these guidelines and the credibility and quality of data and developers involved in the development of AI systems. MCIT presents periodically on status of those projects to the National AI Council. [Accountability]Footnote 79.

Another example of efforts to assert central oversight and coordination is Kenya’s 2023 Government ICT Standards that define requirements that AI systems deployed in government entities must follow, including aligning with International Organization for Standardization (ISO) standards. This is to “meet critical objectives for functionality, interoperability, and trustworthiness.”Footnote 80

6. Discussion

When looking at AI governance in African countries, it is hard to say how much AI exceptionalism is warranted, and where dedicated policies, legislation, and regulatory institutions are needed (Calo, 2015). However, by reflecting on existing capabilities, we can start to identify opportunities for policy learning between sectors and countries.

A key starting point is the role and operation of multi-stakeholder arrangements. These structures can enhance the legitimacy of policy processes, help governments access financial or technical resources, expand access to relevant information, and support oversight. There are also risks and challenges with these structures. Malawi and Mozambique's experience with PPPs, and their efforts to ensure transparency and build the contract management skills of public officials, seem relevant to the management of joint AI initiatives. This is especially applicable to AI because of uncertainty about how to procure AI-based technology (Nagitta et al., 2022) and the lack of transparency inherent to its implementation (Rudin, 2019). In addition, there continue to be challenges in establishing and managing multi-stakeholder forums as policy formulation or oversight structures. More explicit attention can be directed to learning from experiences with, for example, national OGP implementations and how these can balance governments' political and legal responsibility with meaningful involvement of other actors. There are also interesting initiatives aimed more directly at achieving ethical outcomes, such as Egypt's Charter on Responsible AI, which calls for "mutual accountability" between expert leaders and the wider public. More practically, recent activities by the Human Rights Commission in South Africa, supported by discrimination-related legislation, and work by NAID in Egypt suggest that there are opportunities to strengthen the existing work (and oversight and enforcement activities) of inclusion and rights-focused entities on digital, data, and AI.

In other policy areas, we see similar opportunities for policy learning. One of these is at the intersection of electronic transactions, evidence, and personal data protection legislation; mainly around the legality of automated processes, accountability (and liability) for outcomes, and the role of human oversight. In some cases, there is more direct reference to current AI-related practices, such as Rwanda's limits on profiling and predicting behaviour. As noted earlier, the extent to which these areas of legislation need to be explicit about current AI activities and risks, versus allowing for a more incremental common or soft law response, is a key question (Guihot et al., 2017; Calo, 2015).

In addition, certain sector-specific legislation and policy has already engaged quite extensively with these issues. Consumer protection around electronic commerce, as well as in financial services—especially in credit—has long been concerned with unfair profiling and manipulation of consumer behaviour. Moreover, the financial sector has for many years needed to address risk related to the use of models. Whilst a very different field, environmental sector EIAs and air quality assessments provide an interesting example of how model management regulation is integrated with wider transparency requirements. The health sector was not included in our analysis, but there is a growing body of Africa-centric research around AI governance that is likely to provide useful insights on specific issues, such as accountability and liability, which could be relevant to other sectors (Bottomley and Thaldar, 2023).

The environmental and ecosystems policy area also seems to offer opportunities for learning from community-based governance arrangements, as proposed for data governance in Namibia (Shilongo, 2023). In a similar vein, there could be further investigation of bottom-up data science and AI initiatives seeking to shape responsible practices, mainly within the developer community. For example, the South African digital and data social enterprises Lelapa AIFootnote 81 and OpenUpFootnote 82 have recently published guidance on responsible design practices, and an Egyptian AI start-up has been exploring the intersection between society and technology from a local perspective.Footnote 83

In general, data governance is one of a few areas that are critical to current ML-oriented AI implementation and a key topic for further research. There are many issues to address around data collection, sharing, analysis, privacy, and security. Possible policy actions could include bottom-up and commons-based models of governance, as explored in the African agriculture space (Baarbé et al., 2019). There are also established data governance capabilities in national statistical offices (NSOs), which have well-developed processes for managing data quality and have been building expertise around the management of unstructured administrative data—both of which are relevant to AI data pipelines.Footnote 84
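As a simple illustration of how NSO-style quality assessment could be applied before administrative records enter an AI pipeline, consider the sketch below. The quality dimensions echo common statistical frameworks (completeness, validity), but the fields and thresholds are hypothetical.

```python
def assess_quality(records, required_fields, valid_ranges, min_completeness=0.95):
    """Report completeness and validity issues in a batch of administrative
    records. Quality dimensions echo common statistical frameworks; the
    fields and thresholds here are hypothetical."""
    report = {"n_records": len(records), "issues": []}
    if not records:
        return report
    for f in required_fields:
        present = sum(1 for r in records if r.get(f) not in (None, ""))
        completeness = present / len(records)
        if completeness < min_completeness:
            report["issues"].append(f"{f}: completeness {completeness:.0%}")
    for f, (lo, hi) in valid_ranges.items():
        bad = sum(1 for r in records
                  if isinstance(r.get(f), (int, float)) and not lo <= r[f] <= hi)
        if bad:
            report["issues"].append(f"{f}: {bad} out-of-range value(s)")
    return report


records = [{"age": 34, "region": "Khomas"}, {"age": 212, "region": ""}]
print(assess_quality(records, ["age", "region"], {"age": (0, 120)}))
```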

Other topics for further research include the role and effectiveness of adaptive policy and regulation methods in African technology governance, including regulatory sandboxes, and whether they could be relevant to AI (Guihot et al., 2017). In addition, the language in many of the emerging AI policies seems to reinforce ideas of technology autonomy, which can undermine the sense of agency amongst (local) governance actors. By looking at how automation, algorithms, and data are talked about in other sectors and countries, we may be able to find better definitions and metaphors to inform our policy and legislative activities (Buiten, 2019). Finally, the role of international norms and law is significant and a key focus (and objective) of the UNESCO Recommendation. The impact of the European Union's General Data Protection Regulation (GDPR) on African data protection legislation has been widely documented (e.g. Makulilo, 2013) and has reinforced the role of local data protection authorities in responsible AI governance (Gwagwa et al., 2021). More recently, African countries have requested accession to other AI-relevant instruments, such as the Council of Europe's Convention 108+.Footnote 85 Also significant are the African Continental Free Trade Agreement (AfCFTA) becoming operational in 2021Footnote 86 and the coming into effect of the African Union (AU) Convention on Cyber Security and Personal Data Protection in June 2023.Footnote 87

7. Conclusion

The aim of this paper was to develop a richer picture of where existing governance capabilities may exist and to consider pathways for facilitating policy learning around a responsible way forward for AI in Africa. This was done through a close reading of the UNESCO Recommendation on the Ethics of AI and a sample of African data, algorithm, ICT, and AI policy and legislation examples.

As AI intersects with various policy domains, different actors claim influence based on their areas of expertise or credibility. From this limited investigation, we see potentially important governance capabilities across a range of cross-cutting areas, such as human intervention in electronic transactions, rights and discrimination-focused legislation, transparent management of digital PPPs, and technology procurement capacity building. We also see established capabilities in sector verticals, such as model management frameworks for banking and air quality assessment, and consumer protection in e-commerce.

This does not mean there are no challenges with formulating, implementing, or enforcing policy and legislation in these areas. It is for this reason that we should not let the attention on AI distract us from investing in the core digital and data governance capabilities (and associated infrastructures, skills, etc.) that enable the development and adoption of innovative technologies. We can then critically consider whether and when dedicated AI policy, legislation, and institutional arrangements may be appropriate.

More work is therefore needed to understand the language, mechanisms, and processes of policy and institutional learning through comparative policy analysis and in-country research. This will help policy actors understand potential pathways for improving governance capabilities around data, automation, algorithms, and AI.

Acknowledgements

N/A

Author contribution

Conceptualisation and methodology: P.P. and K.S. Investigation: P.P., A.U., and O.M. Writing original draft: P.P., K.S., O.M., M.G., and G.R. All authors approved the final submitted draft.

Data availability statement

N/A

Provenance

This article is part of the Data for Policy 2024 Proceedings and was accepted in Data & Policy on the strength of the Conference’s review process.

Funding statement

This research was supported by a grant to Research ICT Africa from the International Development Research Centre (IDRC) for the African Observatory on Responsible Artificial Intelligence. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interest

The authors declare none.

Footnotes

1 For example, see Kenya—Report of the Distributed Ledgers Technology and Artificial Intelligence Task Force: Emerging Digital Technologies for Kenya https://www.ict.go.ke/blockchain.pdf

2 For example, see South Africa—Minister’s Remarks on Artificial Intelligence (AI) Regulation and Development in Namibia https://www.dcdt.gov.za/minister-s-speeches/329-minister-s-remarks-on-artificial-intelligence-ai-regulation-and-development-in-namibia-4-november-2021.html

3 OECD—Recommendation of the Council on Artificial Intelligence https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449

4 European Commission—Ethics guidelines for trustworthy AI https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

5 IEEE—The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems https://standards.ieee.org/industry-connections/ec/autonomous-systems/

6 African Observatory on Responsible AI—Sandboxes in Mauritius, 8 June 2023 https://www.africanobservatory.ai/social/sandboxes-in-mauritius

7 ITU-WHO Focus Group on Artificial Intelligence for Health https://www.itu.int/en/ITU-T/focusgroups/ai4h/Pages/default.aspx

8 ICT Policy Africa https://ictpolicyafrica.org/

12 Smart Africa - Blueprint: Artificial Intelligence for Africa, 2021 https://smartafrica.org/knowledge/artificial-intelligence-for-africa/

13 Egypt—Egypt National Artificial Intelligence Strategy https://mcit.gov.eg/en/Publication/Publication_Summary/9283

14 Benin—Stratégie Nationale d’Intelligence Artificielle et des Mégadonnées 2023–2027 https://numerique.gouv.bj/publications/documents

15 South Africa—Report of the Presidential Commission on the 4th Industrial Revolution https://www.dcdt.gov.za/documents/reports.html

16 Rwanda—The National AI Policy https://www.risa.gov.rw/publications/policies

17 Mauritius—Mauritius Artificial Intelligence Strategy https://ncb.govmu.org/ncb/strategicplans/MauritiusAIStrategy2018.pdf

18 See p.5 of Egypt—Egypt National Artificial Intelligence Strategy

19 South Africa—Report of the Presidential Commission on the 4th Industrial Revolution https://www.dcdt.gov.za/documents/reports.html

20 Kenya—Report of the Distributed Ledgers Technology and Artificial Intelligence Task Force: Emerging Digital Technologies for Kenya—Exploration and Analysis https://www.ict.go.ke/blockchain.pdf

21 African Observatory on Responsible AI—Sandboxes in Mauritius https://www.africanobservatory.ai/social/sandboxes-in-mauritius

22 See p.19 of Benin—Stratégie Nationale d’Intelligence Artificielle et des Mégadonnées 2023–2027

23 See p.15–17 of Rwanda—AI strategy

24 See p.34 of Malawi—Digital Economy Strategy 2021 https://www.scribd.com/document/516025365/Malawi-Digital-Economy-Strategy

25 See p.46 of Malawi—Digital Economy Strategy 2021

26 Malawi—Public-Private Partnership Act Chap. 46:07 https://malawilii.org/akn/mw/act/2011/27/eng@2014-12-31

27 Malawi—Public Procurement Act Chap. 37:03 https://malawilii.org/akn/mw/act/2003/8/eng@2014-12-31

28 Public Private Partnership Commission, Grievances Redress Mechanism Manual https://www.pppc.mw/downloads

29 See p.34 of Malawi—Digital Economy Strategy 2021

30 See p.5007 of Mozambique—Strategic Plan for the Information Society 2019–2028, Resolution No. 52/2019 https://www.mctes.gov.mz/wp-content/uploads/2021/10/

31 See p.17 of Mauritius—Mauritius Artificial Intelligence Strategy https://ncb.govmu.org/ncb/strategicplans/MauritiusAIStrategy2018.pdf

32 See p.28 of Egypt—Egypt National Artificial Intelligence Strategy

34 See p.1 and p.18 of Rwanda—The National AI Policy

35 See p.12–13 of Benin—Stratégie Nationale d’Intelligence Artificielle et des Mégadonnées 2023–2027

36 OGP—Designing and managing an OGP multistakeholder forum, 2019 https://www.opengovpartnership.org/multistakeholder-forums/

37 For example, see p.15 of Malawi—Digital Government Strategy.

39 See p.24 of South Africa—White Paper on Science, Technology and Innovation, 2019 https://www.gov.za/documents/white-paper-science-technology-and-innovation-1-mar-2019-0000

40 UNESCO’s international expert group begins work on drafting the first global recommendation on the ethics of AI https://www.cair.org.za/group/7/news/unescos-international-expert-group-begins-work-drafting-first-global-recommendation

41 NAID Organizes iCamp Panel Discussion on AI, Modern Tech https://naid.gov.eg/en/press/naid-organizes-icamp-panel-discussion-on-ai-modern-tech

44 For example, see South Africa—Promotion of Equality and Prevention of Unfair Discrimination Act 4 of 2000 and South Africa—National Strategic Framework on Universal Design and Access https://www.lawsofsouthafrica.up.ac.za/index.php/current-legislation

46 See Part III, clause 14 of Botswana—Electronic Communications and Transactions Act No. 14 of 2014 https://www.bocra.org.bw/documents%26legislation

47 Botswana—Electronic Records (Evidence) Act No. 13 of 2014 https://www.bocra.org.bw/documents%26legislation

48 See Chapter 5, Section I, Article 35 of Mozambique—Electronic Transactions Law 3/2017 https://www.portaldogoverno.gov.mz/por/Media/COMUNICADO-DA-16.a-SOCM2023/Lei-de-Transaccoes-Electronicas

49 HIPSSA, Electronic Transactions and Electronic Commerce: Southern African Development Community (SADC) Model Law, 2013 https://www.itu.int/en/ITU-D/Projects/ITU-EC-ACP/HIPSSA/Pages/default.aspx

50 See Article 3:11 of Rwanda—Law N° 058/2021 relating to the protection of personal data and privacy https://dpo.gov.rw/documentations/

51 See clause 62(3) of Zambia—Data Protection Act No. 3 of 2021 https://www.parliament.gov.zm/node/8853

52 See p.3 of Egypt—Egyptian Charter for Responsible AI https://aicm.ai.gov.eg/en/Resources/EgyptianCharterForResponsibleAIEnglish-v1.0.pdf

53 See Part IV, clause 20 of Botswana—Electronic Communications and Transactions Act No. 14 of 2014.

54 Mozambique—Lei de Defesa do Consumidor N.°22/2009

55 See Part VIII, clause 40 of Botswana—Electronic Communications and Transactions Act No. 14 of 2014

56 See p.38 of UNESCO Recommendation

57 See p.9 of Egypt—Egypt National Artificial Intelligence Strategy

58 See p.38 of Kenya—Report of the Distributed Ledgers Technology and Artificial Intelligence Task Force: Emerging Digital Technologies for Kenya

59 See p.62 of Mauritius—Mauritius Artificial Intelligence Strategy

60 See p.57 of Egypt—Egypt National Artificial Intelligence Strategy

61 See p.68 of Mauritius—Mauritius Artificial Intelligence Strategy

62 African Observatory on Responsible AI https://www.africanobservatory.ai/

63 See p.4 of South Africa—Guidelines to Develop Codes of Conduct—Issued under the Protection of Personal Information Act 4 https://inforegulator.org.za/wp-content/uploads/2020/07/InfoRegSA-Guidelines-DevelopCodeOfConduct-22Feb2021.pdf

64 p.30 of South Africa—Competition in the Digital Economy Report 2020 https://www.compcom.co.za/wp-content/uploads/2021/03/Digital-Markets-Paper-2021-002-1.pdf

65 South Africa—Competition Commission Inquiry into platform companies 2021/22 https://www.compcom.co.za/online-intermediation-platforms-market-inquiry/

66 South Africa—Research Working Papers & Webinars, Wits Institute for Social & Economic Research (WISER) https://wiser.wits.ac.za/page/african-digital-competition-research-working-papers-webinars-13755

68 See Part VII, clause 71 (2) of Zambia—Electronic Communications and Transactions Act 4 of 2021

69 African Observatory on Responsible AI—Sandboxes in Mauritius https://www.africanobservatory.ai/social/sandboxes-in-mauritius

70 See p.20 of Code of Conduct for the Processing of Personal Information by the Banking Industry https://inforegulator.org.za/codes-of-conducts/

71 See p.17 of Code of Conduct—Lawful Processing of Personal Information by Credit Bureaus in South Africa https://inforegulator.org.za/codes-of-conducts/

72 See Part 7, clause 7.3(2)(b) of South Africa—Regulations Under the Short Term Insurance Act, 1998 https://www.lawsofsouthafrica.up.ac.za/index.php/current-legislation

73 See, for example, Botswana—Environmental Assessment Act No.10 of 2011 https://www.gov.bw/sites/default/files/2020-02/Environmental%20Assessment%20Act%202011.pdf and South Africa National Environment Management Act 107 of 1998 https://www.lawsofsouthafrica.up.ac.za/index.php/current-legislation

74 South Africa—Regulations Regarding Air Dispersion Modelling https://www.lawsofsouthafrica.up.ac.za/index.php/current-legislation

75 See p.14–15 of Rwanda—The National AI Policy

76 South Africa—White Paper on Transforming Public Service Delivery (Batho Pele White Paper) https://www.gov.za/documents/transforming-public-service-delivery-white-paper-batho-pele-white-paper

77 South Africa—Promotion of Administrative Justice Act 3 of 2000 https://www.lawsofsouthafrica.up.ac.za/index.php/current-legislation

78 MISA, Namibia Moves Closer to Access to Information Law, 2022

79 See p.5 of Egypt—Egyptian Charter for Responsible AI

80 See p.47–48 of Kenya—Systems & Applications Standard ICTA.6.003:2023 https://www.icta.go.ke/ict-standards

81 Responsible AI framework for Data Science Practitioners, Lelapa AI, 7 June 2023 https://lelapa.ai/responsible-ai-framework-for-data-science-practitioners-2/

82 How to innovate with AI and technology in a risk-conscious way, Working Paper II, OpenUp, 10 August 2023 https://openup.org.za/blog/working-paper-ii-risk-and-design

84 For example, see Rwanda—the National Institute of Statistics has developed Guidelines for Quality Assessment of Administrative Data https://statistics.gov.rw/publication/guidelines-quality-assessment-administrative-data

85 Convention 108 and Protocols, Council of Europe https://www.coe.int/en/web/data-protection/convention108-and-protocol

86 See more on the AfCFTA Digital Trade Protocol at https://au-afcfta.org/trade-areas/digital-trade/

87 AU Convention on Cyber Security and Personal Data Protection https://au.int/en/treaties/african-union-convention-cyber-security-and-personal-data-protection

References

Adams, R (2021) Can artificial intelligence be decolonized? Interdisciplinary Science Reviews 46(1–2), 176–197.
Adams, R (2022) Designing a Rights-Based Global Index on Responsible AI. Research Reflections. Cape Town: Research ICT Africa. Available at https://africaportal.org/publication/designing-rights-based-global-index-responsible-ai/
Ayentimi, DT and Burgess, J (2019) Is the fourth industrial revolution relevant to sub-Sahara Africa? Technology Analysis & Strategic Management 31(6), 641–652.
Baarbé, J, Blom, M and De Beer, J (2019) A proposed "agricultural data commons" in support of food security. The African Journal of Information and Communication 23, 1–33.
Bottomley, D and Thaldar, D (2023) Liability for harm caused by AI in healthcare: An overview of the core legal concepts. Frontiers in Pharmacology 14, 1297353.
Breckenridge, K (2021) Documentary government and mathematical identification: On the theoretical significance of African biometric government. In Dalberto, SA and Banégas, R (eds.), Identification and Citizenship in Africa: Biometrics, the Documentary State and Bureaucratic Writings of the Self. Routledge, pp. 49–64.
Buiten, MC (2019) Towards intelligent regulation of artificial intelligence. European Journal of Risk Regulation 10(1), 41–59.
Calo, R (2015) Robotics and the lessons of cyberlaw. California Law Review 103, 513–563.
Daigle, B (2021) Data protection laws in Africa: A pan-African survey and noted trends. Journal of International Commerce and Economics, United States International Trade Commission.
Davies, T, Walker, SB, Rubinstein, M and Perini, F (2019) The State of Open Data: Histories and Horizons. Cape Town and Ottawa: African Minds and International Development Research Centre.
Deloitte (2021) From Validation to Optimisation, Tackling a Growing Model Landscape: EMEA Model Risk Management Survey. Available at https://www2.deloitte.com/za/en/pages/financial-services/articles/from-validation-to-optimisation.html
Filgueiras, F, Mendonça, RF and Almeida, V (2023) Governing artificial intelligence through a sociotechnical lens. IEEE Internet Computing 27(5), 49–52.
Forster, D (1992) Expert Systems in Health for Developing Countries: Practice, Problems, and Potential. Ottawa: IDRC.
Guihot, M, Matthew, AF and Suzor, NP (2017) Nudging robots: Innovative solutions to regulate artificial intelligence. Vanderbilt Journal of Entertainment & Technology Law 20, 385.
Gwagwa, A, Kachidza, P, Siminyu, K and Smith, M (2021) Responsible Artificial Intelligence in Sub-Saharan Africa: Landscape and General State of Play. Artificial Intelligence for Development—Africa. Available at https://ircai.org/wp-content/uploads/2021/03/AI4D_Report_Responsible_AI_in_SSA.pdf (accessed 12 November 2021).
Heeks, R (2002) e-Government in Africa: Promise and practice. Information Polity 7(2–3), 97–114.
Heeks, R (2005) e-Government as a carrier of context. Journal of Public Policy 25(1), 51–74.
Heeks, R, Gomez-Morantes, JE, Graham, M, Howson, K, Mungai, P, Nicholson, B and Van Belle, JP (2021) Digital platforms and institutional voids in developing countries: The case of ride-hailing markets. World Development 145, 105528.
Labory, S and Bianchi, P (2021) Regional industrial policy in times of big disruption: Building dynamic capabilities in regions. Regional Studies 55(10–11), 1829–1838.
Larsson, S and Heintz, F (2020) Transparency in artificial intelligence. Internet Policy Review 9(2).
Makulilo, AB (2013) Data protection regimes in Africa: Too far from the European 'adequacy' standard? International Data Privacy Law 3(1), 42–50.
McQuillan, D (2015) Algorithmic states of exception. European Journal of Cultural Studies 18(4–5), 564–576.
Mehrabi, N, Morstatter, F, Saxena, N, Lerman, K and Galstyan, A (2021) A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR) 54(6), 1–35.
Metz, T (2021) African reasons why AI should not maximize utility. In Okyere-Manu, BD (ed.), African Values, Ethics, and Technology: Questions, Issues, and Approaches. Cham, Switzerland: Springer International Publishing, pp. 55–72.
Mhlambi, S and Tiribelli, S (2023) Decolonizing AI ethics: Relational autonomy as a means to counter AI harms. Topoi, 1–14.
Moodysson, J, Trippl, M and Zukauskaite, E (2017) Policy learning and smart specialization: Balancing policy change and continuity for new regional industrial paths. Science and Public Policy 44(3), 382–391.
Nagitta, PO, Mugurusi, G, Obicci, PA and Awuor, E (2022) Human-centered artificial intelligence for the public sector: The gate keeping role of the public procurement professional. Procedia Computer Science 200, 1084–1092.
Ndubueze, PN (2020) Cybercrime and legislation in an African context. In Holt, T and Bossler, AM (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance. Switzerland: Springer Nature, pp. 345–364.
OECD (Organisation for Economic Co-operation and Development) (2019a) Artificial Intelligence in Society. Paris: OECD. Available at https://www.oecd-ilibrary.org/science-and-technology/artificial-intelligence-in-society_eedfee77-en
OECD (Organisation for Economic Co-operation and Development) (2019b) Recommendation of the Council on Artificial Intelligence. OECD/LEGAL/0449. Available at https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 (accessed 03 February 2022).
OECD (Organisation for Economic Co-operation and Development) (2021) State of Implementation of the OECD AI Principles. Paris: OECD. Available at https://www.oecd-ilibrary.org/science-and-technology/state-of-implementation-of-the-oecd-ai-principles_1cd40c44-en (accessed 03 February 2022).
OECD (Organisation for Economic Co-operation and Development) (2022) OECD Framework for the Classification of AI Systems. Paris: OECD. Available at https://www.oecd.org/publications/oecd-framework-for-the-classification-of-ai-systems-cb6d9eca-en.htm
OECD (Organisation for Economic Co-operation and Development) (2023) Regulatory Sandboxes in Artificial Intelligence. OECD Digital Economy Papers, No. 356, July 2023. Available at https://www.oecd.org/publications/regulatory-sandboxes-in-artificial-intelligence-8f80a0e6-en.htm
Plantinga, P (2024) Digital discretion and public administration in Africa: Implications for the use of artificial intelligence. Information Development 40(2), 332–352.
Radu, R (2021) Steering the governance of artificial intelligence: National strategies in perspective. Policy and Society 40(2), 178–193.
Raso, J (2021) Implementing digitalization in an administrative justice context. In Tomlinson, J, Thomas, R, Hertogh, M and Kirkham, R (eds.), Oxford Handbook of Administrative Justice. Oxford: Oxford University Press.
Rayner, S (2004) The novelty trap: Why does institutional learning about new technologies seem so difficult? Industry and Higher Education 18(6), 349–355.
Razzano, G (2020) The public-private: A key legal nexus for South Africa's AI future. Research ICT Africa, Policy Brief 6. Available at https://researchictafrica.net/publication/the-public-private-a-key-legal-nexus-for-south-africas-ai-future/
Razzano, G, Gillwald, A, Aguera, P, Ahmed, S, Calandro, E, Matanga, C, Rens, A and van der Spuy, A (2020) SADC Parliamentary Forum Discussion Paper: The Digital Economy and Society. Cape Town: Research ICT Africa. Available at https://researchictafrica.net/publication/sadc-pf-discussion-paper-the-digital-economy-and-society/
Richman, R, von Rummell, N and Wuthrich, MV (2019) Believing the bot – model risk in the era of deep learning. Presented at the Actuarial Society of South Africa 2019 Convention, 22–23 October 2019, Johannesburg.
Rudin, C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence 1(5), 206–215.
Sarker, IH (2021) Machine learning: Algorithms, real-world applications and research directions. SN Computer Science 2(160).
Sawhney, H and Jayakar, K (1999) Universal service: Migration of metaphors. In Cherry, BA, Wildman, SS and Hammond, AS IV (eds.), Making Universal Service Policy. New York, NY: Routledge, pp. 15–37.
Seneviratne, MG, Shah, NH and Chu, L (2019) Bridging the implementation gap of machine learning in healthcare. BMJ Innovations 6(2), 45–47.
Shilongo, K, Gaffley, M, Plantinga, P, Adams, R, Olorunju, N, Mudongo, O, Schroeder, Z, Chanthalangsy, P, Khodeli, I and Xu, L (2022) Landscape Study of AI Policies and Use in Southern Africa: Research Report HAR/SHS/2022/2. Available at https://unesdoc.unesco.org/ark:/48223/pf0000385563
Shilongo, K (2023) A Community-Based Approach to Data and Tech Governance in Namibia. Mozilla Foundation, June 13. Available at https://foundation.mozilla.org/en/blog/data-and-tech-governance-for-public-interest-a-framework-for-accountability/
Sibal, P and Neupane, B (2021) Artificial intelligence needs assessment survey in Africa. UNESCO. Available at https://unesdoc.unesco.org/ark:/48223/pf0000375322
Smuha, NA (2021) From a 'race to AI' to a 'race to AI regulation': Regulatory competition for artificial intelligence. Law, Innovation and Technology 13(1), 57–84.
Swartz, E, Scheepers, CB, Lindgreen, A, Yousafzai, S and Matthee, M (eds.) (2023) Technological Leapfrogging and Innovation in Africa: Digital Transformation and Opportunity for the Next Growth Continent. Edward Elgar Publishing.
Taeihagh, A (2021) Governance of artificial intelligence. Policy and Society 40(2), 137–157.
UNESCO (United Nations Educational, Scientific and Cultural Organization) (2021a) Recommendation on the Ethics of Artificial Intelligence. SHS/BIO/PI/2021/1. Available at https://unesdoc.unesco.org/ark:/48223/pf0000381137
UNESCO (United Nations Educational, Scientific and Cultural Organization) (2021b) Recommendation on Open Science. SC-PCB-SPP/2021/OS/UROS. Available at https://unesdoc.unesco.org/ark:/48223/pf0000379949
UNESCO (United Nations Educational, Scientific and Cultural Organization) (2022) How to Adopt a Multistakeholder Approach to AI Governance in Southern Africa? Available at https://www.unesco.org/en/articles/how-adopt-multistakeholder-approach-ai-governance-southern-africa
UNESCO (United Nations Educational, Scientific and Cultural Organization) (2023) Strengthening Cooperation in Artificial Intelligence in Southern Africa. Available at https://www.unesco.org/en/node/70200
Wairegi, A, Omino, M and Rutenberg, I (2021) AI in Africa: Framing AI through an African lens. Communication, technologies et développement (10).
WEF (World Economic Forum) (2021) The AI Governance Journey: Development and Opportunities. Available at https://www.weforum.org/reports/the-ai-governance-journeydevelopment-and-opportunitie
Wechsler, M, Perlman, L and Gurung, N (2018) The state of regulatory sandboxes in developing countries. In Digital Financial Services Observatory. Columbia Institute for Tele-Information. Available at https://dfsobservatory.com/publication/state-regulatory-sandboxes-developing-countries (accessed 03 February 2022).
Yentcharé, PM and Sedami, S (2023) The regulation of artificial intelligence tools for environmental impact assessment: Learning from the practice of impact analysis in French-speaking sub-Saharan Africa. In Ncube, CB, Oriakhogba, DO, Schonwetter, T and Rutenberg, I (eds.), Artificial Intelligence and the Law in Africa. South Africa: LexisNexis, pp. 319–342.
Figure 1. Policy and legislative landscape with scope of data collection highlighted in grey.

Table 1. Legislation and policy activities identified via Policy Areas in UNESCO Recommendation on the Ethics of AI
