I. Introduction
The AI ActFootnote 1 is the world’s first attempt at comprehensively regulating AI. As is well known, the regulation is claimed to follow a risk-based approach – one that tailors the choice and design of regulatory instruments based on the level of risk, according to the rule: “the higher the risk, the stricter the rules.” To this end, the AI Act distinguishes four risk categories (unacceptable, high, limited, and minimal), defining regulatory requirements based on the risks posed by AI systems.Footnote 2
Recital (26) AI Act points out that:
“In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain unacceptable AI practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.”Footnote 3
Hence, the underlying objective of the AI Act’s risk-based approach is to strike an optimal (or proportionate) balance between innovation and the benefits of AI systems on the one hand, and the protection of fundamental values such as safety, health and fundamental rights on the other.Footnote 4
Recital (26) of the AI Act refers, particularly, to the principle of (legislative) proportionality enshrined in Article 5(4) TEU. According to this article, the “content and form of Union action shall not exceed what is necessary to achieve the objectives of the Treaties.” Further, the article requires institutions of the Union to “apply the principle of proportionality as laid down in the Protocol on the application of the principles of subsidiarity and proportionality.” This protocol, in turn, states that draft legislative acts shall be justified with regard to the principles of subsidiarity and proportionality, taking into account “any burden, whether financial or administrative, falling upon the Union, national governments, regional or local authorities, economic operators and citizens, to be minimised and commensurate with the objective to be achieved.”Footnote 5
Against this backdrop, the AI Act’s risk-based approach can be seen as a legislative technique for promoting a proportionate system of duties and obligations.
While risk-based regulation is indeed the right approach to AI regulation, this analysis shows that important provisions of the AI Act do not follow a truly risk-based approach. Yet, this is nothing that cannot be fixed. The AI Act provides for sufficient tools to support future-proof legislation and to implement it in line with a genuine risk-based approach.
To elaborate on this, the remainder of the article is structured as follows: Section II outlines the key elements of risk-based regulation – discussing the notion of “risk,” the distinction between AI risk assessment, impact assessment and risk management, and the essential features of risk-based regulation. Section III criticises the AI Act, arguing that some of its main provisions are not truly risk-based, leading to over-regulation in some areas and under-regulation in others. In particular, it analyses several problems with the AI Act, such as the lack of a risk-benefit analysis, limited reliance on empirical evidence, and the lack of case-by-case risk classification. Section IV examines how the AI Act can be adjusted to better reflect the elements of a risk-based approach. To this end, the paper analyses the relevant instruments for implementing the AI Act, such as guidelines, delegated and implementing acts, codes of practice, and harmonised standards. Finally, Section V draws conclusions on what lawmakers outside the EU can learn from the AI Act in regulating AI.
II. Key elements of risk-based regulation
1. The notion of risk
The emergence of risk-based approaches to regulating AI systems in the EU and elsewhere in the worldFootnote 6 raises, first and foremost, the question as to what policymakers (and the EU in particular) mean when they talk about risk. Generally speaking, “risk” is the likelihood that a source of hazard will turn into actual harm.Footnote 7 Based on this understanding, Article 3 No. 2 AI Act defines risk as “the combination of the probability of an occurrence of harm and the severity of that harm.” As such, risks are usually distinguished from uncertainties.Footnote 8 While risks are “known knowns” with statistical probabilities and quantifiable effects, uncertainties are “known unknowns” that cannot be quantified because we do not know what the effects of a particular technology might be. In addition, there are “unknown unknowns,” where we are not even aware that things or activities may have adverse effects at all.Footnote 9
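In quantitative terms, the Article 3 No. 2 definition can be read as combining a probability and a severity into a single risk score. The following minimal sketch is purely illustrative – the AI Act prescribes neither a particular combination function nor the thresholds used here – but it shows how such a “combination” is commonly operationalised and why it presupposes quantifiable (“known known”) risks.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """Risk in the sense of Article 3 No. 2 AI Act: probability of harm combined with severity."""
    description: str
    probability: float  # likelihood that the harm occurs (0.0-1.0, illustrative scale)
    severity: float     # severity of the harm (0.0-1.0, illustrative scale)

    def score(self) -> float:
        # One common - but not legally mandated - way to combine the two factors.
        return self.probability * self.severity


def risk_level(score: float) -> str:
    """Map a numeric score to a qualitative level (illustrative cut-offs only)."""
    if score >= 0.5:
        return "high"
    if score >= 0.1:
        return "limited"
    return "minimal"


r = Risk("biased credit-scoring decision", probability=0.2, severity=0.7)
print(risk_level(r.score()))  # 0.14 -> "limited"
```

Uncertainties and “unknown unknowns,” by contrast, resist this kind of scoring, because at least one of the two inputs cannot be meaningfully estimated.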
One of the most pressing issues for any risk-based approach is how to reach consensus on what risks to watch out for and how serious they are considered to be (either in terms of probability or impact, or both).Footnote 10 During the negotiations on the AI Act, there was a lively debate about the criteria for determining which AI practices should be banned in the EU and which AI systems should be classified as “high-risk” and thus allowed on the market only if certain safeguards are put in place.Footnote 11 This illustrates how difficult this endeavour can be, especially if the assessment is not sufficiently based on empirical evidence and a sound methodology.Footnote 12
2. The elements of risk-based regulation: risk assessment and categorisation, impact assessment and risk management
Typically, risk-based approaches to (AI) regulation consist of various elements or phases, namely risk assessment and categorisation, impact assessment, and risk management.Footnote 13
At a high level, we can first distinguish between (i) the assessment of risks arising from the use of AI and (ii) the classification of AI systems or applications by risk.Footnote 14 The first type assesses the risks posed by the use of AI, which may include risks to safety and health, risks of bias and discrimination, lack of fairness, lack of transparency, invasions of privacy and data protection rights, or harm to other protected interests. In the second type, the assessor looks at the risks associated with the use of AI in order to classify the system into a risk category. This classification process helps stakeholders prioritise their efforts and allocate resources more effectively, contributing to the determination of obligations in line with the extent of risk posed by AI systems.
An impact assessment, on the other hand, goes further than a risk assessment. While a risk assessment is about the identification, analysis and evaluation of AI-related risks, an impact assessment seeks to evaluate the wider impact of AI systems on various stakeholders – including users, society and the environment – going beyond the mere discovery and analysis of risks. This usually entails a review of governance, performance, communication, threats to safety and security, and other protected interests.
Based on such an impact assessment, risk-based approaches to AI regulation usually also contain requirements for risk management, which includes the determination, evaluation and ranking of AI-related risks as well as putting policies in place to reduce, track and manage the possibility of unforeseen events.
In the AI Act, all of the above-mentioned elements of risk-based regulation are present.Footnote 15 The AI Act assesses the risks posed by the use of AI and categorises them (mostly on a top-down basisFootnote 16) as unacceptable, high risk, limited risk, and minimal (or no) risk. Moreover, Article 9 AI Act requires providers of high-risk AI systems to establish a risk management system which includes an impact assessment to identify and analyse foreseeable risks that the high-risk AI system may pose to health, safety, or fundamental rights, in order to take appropriate and targeted risk management measures designed to address the identified risks.
3. Essential features of risk-based regulation
Risk-based approaches to AI regulation can take different forms, depending on the ultimate goal of the regulator. As CoglianeseFootnote 17 points out, such a regulation may aim to:
- Eliminate all risk (the zero-risk approach)
- Reduce risk to an acceptable level (the acceptable risk approach)
- Reduce risk until costs become unbearable (the feasibility approach) or
- Strike a balance between risk reduction and costs of regulation (the proportionate or efficiency approach).
As discussed above,Footnote 18 the AI Act is based on the idea of proportionate regulation – aimed at striking an optimal (or proportionate) balance between reducing the risks posed by the use of AI systems on the one hand, and innovation and the benefits of AI systems (or the costs of regulation) on the other.
Accordingly, the question arises as to what key elements a legislator must consider in adopting such an approach. Arguably, these elements include:
- Risk-benefit analysis: When assessing the risks of AI, it is necessary to look not only at the possible harms that AI systems can cause, but also at their economic and social benefits. After all, “risk” is something we take in the name of benefit; we do not typically choose to be harmed. Instead, we – as a society – choose to take certain risks in the name of current and potential societal gains.Footnote 19 Therefore, in order to assess which risks are acceptable and which risks present the possibility of unacceptable harm, a consistent application of the risk-based approach requires a thorough consideration not only of the negative consequences, but also of the positive contributions that AI brings to individuals and society. Such a risk-benefit analysis must include, in particular, the (opportunity) costs of underuse. As the European Parliament already pointed out in 2020, this underuse can also be considered a “major threat.” Missed opportunities for the EU to use AI systems “could mean poor implementation of major programmes, such as the EU Green Deal, losing competitive advantage towards other parts of the world, economic stagnation and poorer possibilities for people.”Footnote 20
- Technology neutrality: A true risk-based approach regulates the risks of applications, not the technology itself. This principle of “technology neutrality” has been recognised by many regulators around the world,Footnote 21 including the EU,Footnote 22 as an overarching principle for ICT regulation. The main aim of this principle is to ensure the equal treatment of technologies with equivalent effects, and to future-proof the law, i.e. to draft legislation in a way that is flexible enough not to impede future technological development and to avoid the need for constant legislative revision.Footnote 23
- Evidence-based risk assessment and categorisation: Another important feature of risk-based regulation is that the assessment and classification of risks requires sufficient empirical evidence and a clear methodology. As AI-related risks are being used to justify governmental regulation, there needs to be a common way to assess and classify these risks. Accordingly, regulators, standards bodies and other stakeholders have been working for many years on risk assessment frameworks and science-based methodologies. Arguably, risk-based regulation works best on quantifiable problems. However, many harms are either not quantifiable at all, or represent a mixture of quantifiable issues with hidden policy choices.Footnote 24 In such cases, the question arises as to whether a risk-based approach is appropriate at all or whether other regulatory techniques (such as a rights-based approach) should be adopted instead.Footnote 25
- Proportionate regulatory burden: Ideally, obligations and other regulatory burdens should be proportionate to the risks posed by AI applications, so that regulatory requirements are aligned with the potential harm and impact of AI systems. Risk-based regulation therefore seeks to create a legal framework in which legal obligations are tailored to the specific risks posed by the use of a particular AI system for a given purpose, in order to avoid overburdening the regulated actors.
- Flexibility and adaptability: Risk-based regulation must also be flexible enough to adjust retrospectively if it turns out that the original risk assessment or categorisation was wrong. As Black puts it, “[r]esponding to risks and attempting to manage them necessarily involves anticipating the future” which by its very nature is unknown.Footnote 26 Especially with new technologies such as AI, it is impossible to reliably assess the risks and benefits of AI systems deployed in given areas ex ante. For this reason, a truly risk-based regulation must require the legislator, the regulator and those who manage and mitigate risks to monitor the performance of AI systems throughout their lifetime, periodically re-evaluating risks and implementing the necessary corrections.
Clearly, some caveats are necessary. First, it is important to note that the criteria listed above are by no means exhaustive. Also, it is essential to remember that risk-based approaches to regulation have their difficulties – such as how to specify, aggregate and quantify risks, how to reconcile conflicting values and how to set levels of acceptable risk – as well as limitations – such as an overreliance on quantification, how to deal with unquantifiable and, in particular, unknown risks, and how to take into account the risk of harm to individuals.Footnote 27 As a result, risk-based regulation usually needs to be complemented by an additional set of rules.
These legitimate concerns should not, however, be used as an overall argument against this type of regulation. In fact, risk-based regulation has a number of strengths when properly implemented.Footnote 28 First, it rationalises government intervention by setting clear priorities and objectives. Second, it facilitates the effective use of scarce resources and allows regulators – if implemented according to a true risk-based approach – to focus compliance efforts on products/services and/or systems that pose the greatest risks. Last, but not least, risk-based regulation can be an effective tool for striking the right balance between the benefits and risks of a particular technology.
III. Is the AI Act a truly risk-based regulation?
While risk-based regulation is indeed the right approach to AI regulation, important elements of the AI Act do not follow a truly risk-based approach.
1. Protecting fundamental rights with a risk-based approach?
One fundamental problem is that the AI Act, with its risk-based approach, seeks to protect not only health and safety but also the fundamental rights of citizens. This is troubling for a number of reasons.
First, the European Union has no general competence to harmonise Member States’ laws to protect human rights. As a result, the AI Act “shoehorns”Footnote 29 the protection of fundamental rights into the scope of Article 114 TFEU, which gives the EU the competence to remove barriers to trade in the internal market. However, such an approach is very likely to fail, because it does not have the protection of rights as its primary goal, but rather the opening and shaping of markets.Footnote 30
Moreover, the EU’s decision to protect human rights in the AI Act primarily through a risk-based approach, rather than a rights-based approach,Footnote 31 is generally ill-suited to that task. Most importantly, such an approach neglects the minimum and non-negotiable nature of human rights. Instead, the AI Act, with its risk-based approach and its fundamental rights impact assessment, implies that fundamental rights violations can be quantified and measured in degrees. This is, however, not the case. As Yeung & Bygrave point out, while it is possible to speak of different levels of culpability, scale, and magnitude when talking about fundamental rights, “these variations do not imply that fundamental rights violations can be, without problems, ranked on a sliding scale from trivial to serious.”Footnote 32 Instead, fundamental rights follow a binary logic in that an activity is either legal or illegal.Footnote 33
Finally, risk is typically assessed at the level of the collective/society and not for the individual. Rather than preventing individual harm, risk thinking assesses harm at a social level or a society-wide scale. One of the consequences of this aggregate nature of risk is that individual differences are typically ironed out, as risk analysis often determines acceptable risks by looking at the average citizen.Footnote 34 Another consequence is that risk-based regulation often involves society-wide trade-offs (e.g. between fairness and efficiency), with the result that even immense individual harms may be dismissed.Footnote 35
For all these reasons, a risk-based approach is difficult to reconcile with the protection of fundamental rights.
2. Missing risk-benefit analysis
Another problem with the AI Act is that it lacks a risk-benefit analysis – a fundamental component of a truly risk-based approach to regulation.
As outlined above,Footnote 36 a risk-benefit analysis involves assessing the potential risks and benefits of a particular action or technology to determine whether the benefits outweigh the risks. In addition to determining whether a system has the potential to cause harm and the severity of the likely harm, the analysis must also consider the benefits and (likely) positive outcomes of using a system – such as advancing scientific discovery.Footnote 37 Otherwise, there is no appropriate framework for a proportionate and balanced regulatory regime.
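In its simplest – and admittedly reductive – form, such an analysis compares expected harms against expected benefits. The sketch below is an illustration only: the scales, figures and decision rule are assumptions made for the purpose of the example, not anything prescribed by the AI Act or the MDR.

```python
def expected_value(probability: float, magnitude: float) -> float:
    return probability * magnitude


def benefits_outweigh_risks(harms: list[tuple[float, float]],
                            benefits: list[tuple[float, float]]) -> bool:
    """Compare aggregated expected harm with aggregated expected benefit.

    Each entry is a (probability, magnitude) pair on a common - and admittedly
    contestable - 0-to-1 scale. A real assessment would also have to deal with
    incommensurable values and distributional effects.
    """
    expected_harm = sum(expected_value(p, m) for p, m in harms)
    expected_benefit = sum(expected_value(p, m) for p, m in benefits)
    return expected_benefit > expected_harm


# Hypothetical AI-based diagnostic tool (illustrative figures only):
harms = [(0.05, 0.8)]                 # rare but serious misdiagnoses
benefits = [(0.9, 0.3), (0.5, 0.2)]   # earlier detection, reduced clinician workload
print(benefits_outweigh_risks(harms, benefits))  # True: 0.37 > 0.04
```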
In sharp contrast to this, the AI Act does not consider the potential benefits of AI systems alongside the risks they pose. Instead, it focuses primarily on preventing risks and threats to health, safety and fundamental rights, without considering the potential positive impacts of AI systems.
Besides undermining the idea of truly risk-based regulation, this approach ignores the positive contributions of technology and may also result in missed opportunities for societal progress and innovation. In other words, by not considering the positive aspects of the technology in the regulatory framework, the AI Act fails to harness the potential of AI systems to improve the common good.
This absence of a risk-benefit analysis in the AI Act also makes it difficult to strike an appropriate balance between potential risks and benefits in dilemma situations. This is illustrated by an example from health care,Footnote 38 where it is currently unclear whether AI-based medical devices should have a minimum level of transparency before they can be released to the market. Some data scientists argue that regulators should only allow inherently interpretable algorithmic models while banning AI systems with algorithmic opacity that cannot be technically resolved.Footnote 39 However, studies show that some opaque AI systems (e.g. deep neural networks) have a much higher degree of accuracy and efficiency than transparent systems (e.g. deductive and rule-based systems).Footnote 40 In such a situation, a trade-off between AI’s accuracy and transparency must be made.Footnote 41
The Medical Device Regulation (MDR)Footnote 42 provides for exactly such a balance, allowing certain risks to be recognised as acceptable if they are outweighed by the corresponding benefits (Annex I No 4 MDR). Thus, the inherent algorithmic opacity of a medical device may be considered an acceptable risk if the manufacturer can demonstrate that the benefits of using such a device outweigh the risks.
Whether such a trade-off is also possible under the AI Act – which, in the case of AI-based medical devices requiring third-party conformity assessment, applies in parallel with the MDR – is unclear, given that the AI Act considers only possible risks and their prevention. Therefore, the AI Act does not provide a clear answer to the question of whether a certain degree of algorithmic opacity can be considered an acceptable risk in light of the benefits of using AI. Obviously, the intention of the EU legislator was not to eliminate all risks (the zero-risk approach), but to strike a balance between risk reduction and the costs of regulation (the proportionate or efficiency approach), as discussed in Section II. Hence, a truly risk-based implementation of the AI Act requires striking a balance between algorithmic transparency and efficiency/accuracy.
3. Limited reliance on empirical evidence
Another reason why the AI Act is not truly risk-based is its limited reliance on empirical evidence for the design of the different risk categories. As scholars have pointed out,Footnote 43 the AI Act does not establish criteria for when AI poses an unacceptable risk to society and individuals. Instead, it merely provides a set list of categories of AI systems that are considered to pose “unacceptable risks” and are therefore banned in the EU.
For high-risk AI systems, Article 7(2) AI Act sets out the criteria to be taken into account by the European Commission when amending the list of Annex III high-risk AI systems. However, neither the recitals of the AI Act nor any accompanying EU document explain and justify how these criteria were applied to identify the areas listed in Annex III in the first place. As Grozdanovski and De Cooman conclude, when choosing which risks to address, “regulators were generally disinterested in statistical evidence on the possibly harmful features of various systems.”Footnote 44
This applies both to the Commission’s original proposal, which failed to gather empirical evidence for the design of the AI Act, and to the subsequent legislative process.
Instead of conducting its own practical studies in concrete use cases of AI, the European Commission relied heavily on public consultationsFootnote 45 for its proposal, despite acknowledging that “robust and representative evidence for harms inflicted by the use of AI is scarce due to the lack of data and mechanisms to monitor AI as a set of emerging technology.”Footnote 46 Inconsistencies in reported participation numbers as well as the methods employed during the consultation – in particular, the use of close-ended questions and pre-suggested answers – cast further doubt on the accuracy and representativeness of the data collected.Footnote 47 Moreover, there is a discrepancy between the results of the consultation and the final proposal. In certain areas, the proposal deviates from the consensus expressed by the respondents.Footnote 48
The political nature of the definition of risk categories was also evident in the subsequent trilogue negotiations. The Council and the European Parliament proposed new prohibited AI practices and new areas for high-risk AI systems, but with little or no justification as to why these were chosen.Footnote 49
All of this leads to the conclusion that the supposedly “risk-based” nature of the Act is neither based on practical evidence nor justified by externally verifiable criteria, but is the result of a political compromise at a particular point in time and is therefore largely arbitrary.Footnote 50
4. Pre-defined, closed risk categories
A closely related problem concerns the framing of the risk categories themselves.
At first glance, many of the pre-defined categories do not make sense. For instance, the AI Act does not apply to the most dangerous applications – such as killer robots (Article 2(3) AI Act); AI systems developed in the EU to provide support to dictators or hackers outside the EU (Article 2(1)(c) AI Act);Footnote 51 and autonomous vehicles, drones/airplanes and vessels (Article 2(2) in conjunction with Annex I.B. AI Act).
On the other hand, many applications qualify as high-risk AI systems under Annex III simply because they are used in a particular sector, even though they do not pose a serious risk of harm – such as tools to detect duplicates in datasets or tools to improve language.Footnote 52 While it is true that, in both of these cases, providers have the possibility under Article 6(3) sub 2(a)–(b) AI Act to demonstrate that their systems do not qualify as high-risk, this does not change the fact that these tools can be covered by Annex III and only exceptionally exempted, which places the burden of proof (and documentation, Article 6(4) AI Act) on the provider.
At a more fundamental level, the examples discussed point to the real problem. The AI Act provides a broad and rather abstract classification of high-risk systems under Annex III. Instead of providing a risk classification on a case-by-case basis, it uses a pre-defined (closed) list of typical high-risk applications. Whether an AI system used in a specific sector for specific purposes poses a high risk to health, safety and/or fundamental rights, is not assessed for the concrete risk, but is pre-defined for typical cases in Annex III. Accordingly, the risk management system required by Article 9 AI Act is only obligatory in situations that are already classified by the AI Act as high-risk cases. As a result, the risk management obligations of providers under the AI Act consist mainly of risk mitigation rather than risk assessment.Footnote 53
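The classification logic just described can be made explicit. The following sketch is a deliberately simplified rendering of Articles 6(2)–(4) and Annex III (it omits, among other things, the Article 6(1) route via Annex I product legislation and collapses the Article 6(3) conditions into a single flag); it illustrates the structure of the rule rather than stating the law.

```python
def high_risk_under_annex_iii(in_annex_iii_area: bool,
                              performs_profiling: bool,
                              only_narrow_or_preparatory_task: bool) -> bool:
    """Simplified sketch of the Annex III classification (Articles 6(2)-(4) AI Act).

    The starting point is the pre-defined list: a system used in an Annex III area
    is presumed high-risk. The provider may rebut the presumption under Article 6(3)
    by showing that the system does not pose a significant risk of harm (e.g. it only
    performs a narrow procedural or preparatory task) - unless the system performs
    profiling of natural persons, in which case it always remains high-risk. The
    assessment must be documented by the provider (Article 6(4) AI Act).
    """
    if not in_annex_iii_area:
        return False
    if performs_profiling:
        return True
    if only_narrow_or_preparatory_task:
        # Exemption applies, but the burden of assessment and documentation
        # lies with the provider.
        return False
    return True


# A duplicate-detection tool used within an Annex III area: covered by the list,
# exempted only if the provider documents the Article 6(3) derogation.
print(high_risk_under_annex_iii(True, False, True))  # False
```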
The choice of such a top-down regulation raises several issues. First, this approach leads to over-regulation where, for instance, an AI system falls into one of the eight categories listed in Annex III, but in reality does not pose a significant risk of harm. Second, the list of typical high-risk AI systems (albeit with broad definitions and open to updating) may not be easy for the European Commission to keep up to date in a timely manner, given how rapidly AI technology is evolving.Footnote 54 Moreover, the decision to delegate (to the Commission) the power to amend Annex III by adding, modifying and removing high-risk AI systems (Article 7 AI Act) raises concerns in terms of power allocation.Footnote 55
Finally, the focus on a pre-defined list of high-risk AI systems also creates a sharp rift between this category and other lower-risk categories that are largely unregulated (with the exception of transparency requirements, Article 50 AI Act). In particular, such a rigid distinction is not justified in cases where an AI system is used in a specific sector (e.g. healthcare sector) but does not qualify as high-risk (e.g. because the system does not qualify as a medical device according to Article 6(1), Annex I.A.11 AI Act and the MDR),Footnote 56 but nevertheless poses numerous risks (e.g. to patients and care recipients due to its direct and indirect effects on the human body and mental health).Footnote 57
5. Regulation of GPAI models as a contradiction to the risk-based approach?
Moreover, the specific obligations of providers of GPAI models with systemic risk to conduct model evaluations, assess and mitigate potential systemic risks, monitor and report serious incidents, take corrective action, and ensure an appropriate level of cybersecurity (Article 55 AI Act) are not consistent with a genuine risk-based approach.
First, by definition, a GPAI model is not limited to specific use cases or applications. Instead, such a model “displays significant generality and is capable of competently performing a wide range of distinct tasks” so that it “can be integrated into a variety of downstream systems or applications” (Article 3 No. 63 AI Act). Given that GPAI models can be used across a wide variety of industries, it is difficult (if not impossible) to formulate precise standards for classifying risks as systemic. Therefore, the AI Act does not specify which risks are systemic; instead, Article 3 No. 65 AI Act refers (generally) to negative effects on public health, safety, public security and fundamental rights. Thus, providers have no guidance on what constitutes a systemic risk and how to mitigate it. Arguably, providers can demonstrate compliance through codes of practice, which will be facilitated by the AI Office until a harmonised standard is published (Article 55(2)(1) AI Act). However, this does not change the fact that “systemic” risks cannot be quantified and specified in the same way as other risks regulated by the AI Act, because they do not concern the probability of the occurrence of a certain harm (cf. Article 3 No. 2 AI Act), but rather the impact on the “Union market” or on “society as a whole” (cf. Article 3 No. 65 AI Act).
Such “risk” regulation has nothing in common with the type of risk-based regulation described above in Section II.
In addition, the AI Act assumes that a particularly high amount of computation used to train GPAI models also increases their risk (Recital 111 AI Act). Therefore, Article 51(2) AI Act states that a GPAI model is presumed to have high-impact capabilities – qualifying it as a model with “systemic risk” under Article 51(1)(a) AI Act – if the cumulative amount of computation used for its training exceeds 10^25 floating-point operations (FLOPs). However, such a threshold is questionable for at least three reasons:
- First, the (systemic) risk of GPAI models depends not only on the quantity of computational resources used, but also on a number of other factors – such as the context of the application, the model architecture, and the quality of the training.Footnote 58
- Second, research shows that LLMs trained with fewer FLOPs can be just as risky and can even outperform larger models with more parameters, which calls the 10^25 FLOPs threshold into question.Footnote 59
- And third, the FLOPs threshold was set primarily for political reasons: in order to strengthen the European economy with its two start-ups Mistral (from France) and Aleph Alpha (from Germany), France and Germany in particular successfully lobbied during the negotiations to keep both companies below the threshold.Footnote 60
This displays, once again, how arbitrarily the AI Act defines “systemic” risks.
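To see how blunt the compute presumption is in practice, consider a rough back-of-the-envelope check. The sketch below uses the widely cited rule of thumb that training compute is roughly 6 × (number of parameters) × (number of training tokens); both this approximation and the example figures are assumptions made for illustration, not values taken from the AI Act or from any provider’s disclosures.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption in Article 51(2) AI Act


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    # Common rule of thumb for dense transformer training:
    # roughly 6 FLOPs per parameter per training token.
    return 6 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical models, illustrative numbers only:
print(presumed_systemic_risk(parameters=70e9, training_tokens=15e12))   # ~6.3e24 FLOPs -> False
print(presumed_systemic_risk(parameters=400e9, training_tokens=15e12))  # ~3.6e25 FLOPs -> True
```

On this logic, whether a model falls under the systemic-risk regime turns on a single compute figure, while capability per FLOP, model architecture and deployment context play no role in the presumption – which is precisely the concern raised above.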
6. Overly broad AI definition
Another major setback from the perspective of risk-based regulation is the overly broad definition of AI.
Initially, the European Commission justified its Proposal with the specific characteristics of (unpredictable) software systems based on machine learning.Footnote 61 Accordingly, many of the AI Act’s mandatory requirements and obligations for high-risk AI systems attempt to mitigate mainly the risks of AI systems based on machine learning, while such far-reaching obligations are not strictly necessary for other software systems.
From the perspective of a truly risk-based approach, as well as the principle of technology neutrality, the AI Act should therefore have imposed different regulatory burdens on different designs, because predictable AI systems do not pose the same risks as unpredictable systems based on machine learning.Footnote 62
However, this is not the case. According to Article 3 No. 1 AI Act, an AI system is “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”Footnote 63 This definition – an adaptation of the OECD’s updated definition of AIFootnote 64 – extends the meaning of AI systems to cover (almost) all software systems,Footnote 65 as (i) there is no threshold for the level of autonomy required for a system to be classified as such, and (ii) the use of the word “may” implies that systems do not always have to exhibit adaptiveness after deployment, to be considered AI. Hence, the AI Act applies not only to machine learning, but also to logic- and knowledge-based approaches (recital 12 AI Act).
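The breadth of the definition can be illustrated with a deliberately trivial, hypothetical example: the following snippet is fully deterministic and involves no machine learning, yet – on the broad reading criticised here – it receives input, “infers” how to generate an output in the form of a recommendation, and influences a physical environment once deployed.

```python
def recommend_irrigation(soil_moisture: float, rain_forecast_mm: float) -> str:
    """A hard-coded, rule-based decision aid: no learning, no adaptiveness,
    fully predictable - yet it generates a recommendation that influences
    a physical environment (the field being irrigated)."""
    if soil_moisture < 0.2 and rain_forecast_mm < 5:
        return "irrigate"
    return "do not irrigate"


print(recommend_irrigation(soil_moisture=0.15, rain_forecast_mm=2))  # "irrigate"
```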
As a result, even deterministic software systems used in high-risk sectors are subject to the highest requirements. Consequently, as Schrepel puts it: “By not discriminating between AI systems based on their functioning, the AI Act indirectly sanctions those that are safer and easier to control”Footnote 66 – contrary to the components of a true risk-based approach.
7. Double regulatory burdens due to the horizontal approach
The AI Act does not replace existing EU law, but applies concurrently to it. As a result, companies and individuals will have to observe not only the AI Act, but also other related legislation – such as EU data protection law (Article 2(7) AI Act), EU copyright law, and EU consumer and product safety law (Article 2(9) AI Act). As many principles and provisions of the AI Act overlap with those of pre-existing legislation, such a horizontal approach inevitably leads – in many areas – to legal uncertainty, different interpretations, contradictions and, ultimately, double regulatory burdens – contrary to the idea of risk-based regulation.
Consider the following three examples from data protection law, medical law and product safety for machinery:
- (i) As the AI Act and EU data protection law apply in parallel, both the EDPB and the EDPS already pointed out during the negotiations that it is important to avoid any inconsistencies and possible conflicts between the AI Act and data protection law.Footnote 67 However, these concerns have largely not been taken into account. For example, both the GDPR and the AI Act impose transparency obligations, but their scope and requirements are regulated differently in the two laws.Footnote 68 Another example is the right to explanation and human intervention/oversight. While the GDPR requires human intervention (Article 22(3) GDPR) and grants a right to meaningful information for decisions based solely on automated processing, including profiling (Article 15(1)(h) GDPR), the AI Act requires human oversight (Article 14 AI Act) and grants a right to explanation of individual decision-making (Article 86 AI Act) for high-risk AI systems – whereby the content as well as the prerequisites and legal consequences are regulated completely differently.
- (ii) The relationship between the AI Act and the Medical Devices Regulation (MDR) is also currently unclear.Footnote 69 Given that both pieces of legislation apply simultaneously, without a formal hierarchy clause in either the AI Act or the MDR to decide which of the overlapping rules should apply, a number of inconsistencies and contradictions arise. For example, the AI Act’s definitions – such as “importer,” “putting into service,” “provider” and “deployer” – differ from those of the MDR.Footnote 70 These differences not only complicate compliance with both regulations, but will also make it very difficult for providers to integrate the documentation required under the AI Act into the MDR documentation “to ensure consistency, avoid duplication and minimise additional burdens” (cf. Art. 8(2)(2) AI Act).
- (iii) A third example is the new Machinery Regulation (MR),Footnote 71 which applies alongside the AI Act when an AI system is used in a machine. This also results in a duplication of requirements. For example, both the AI Act and the MR require human oversight, but the two sets of rules differ in detail.Footnote 72 Another overlap and contradiction concerns the recording and retention of decision-making data. While the MR requires “enabled” recording of “data on the safety-related decision-making process,” the AI Act requires that high-risk AI systems automatically record logs of “events” throughout the system’s lifespan. This discrepancy is likely to create practical challenges for companies seeking to ensure compliance with both regulatory frameworks, resulting in double regulatory burdens.
All three examples show that the AI Act will create inefficiencies, regulatory uncertainty and increased compliance costs, due to conflicting or duplicative requirements in both the AI Act and other EU laws.
8. Overlap of enforcement structures
Since the AI Act applies in addition to other existing EU laws, there is also a risk that the same use of an AI system may be subject to different regulatory authorities within a Member State. Article 70 AI Act leaves the designation of competent authorities to the Member States.Footnote 73 Member States may choose to entrust the enforcement of the AI Act either to existing bodies (such as national data protection authorities) or to entirely new administrative bodies (such as the Agency for the Supervision of AI in Spain). This will most likely lead to overlapping enforcement structures, duplication of procedures, inconsistencies between these procedures and, in the worst case, double fines for the same set of facts. As a result, coordination between the different regulatory bodies is necessary to avoid double and/or over-enforcement, which would be contrary to the constitutional principle of ne bis in idem (Article 50 EU Charter of Fundamental Rights) and the principle of proportionality.Footnote 74
In the absence of specific provisions on cooperation mechanisms in the AI Act and other EU legislation, the overlap of enforcement structures will be a significant challenge, increasing legal uncertainty and compliance costs – again, contrary to the risk-based approach.
IV. How to implement the AI Act in accordance with a truly risk-based approach
The foregoing analysis shows that key provisions of the AI Act do not reflect the characteristics of a true risk-based regulation, leading to legal uncertainty, potential over-regulation and unjustified increases in compliance costs. However, this is nothing that cannot be fixed. The AI Act includes sufficient tools to support future-proof legislation and to implement it in line with a genuine risk-based approach. Accordingly, the following analysis focuses on how to implement the AI Act in accordance with a truly risk-based approach.
1. The risk-based approach as a guiding principle for implementing the AI Act
The implementation of the AI Act can take several forms. In this paper, the term “implementation” is used to describe the measures taken to ensure compliance with the obligations imposed by the AI Act. Implementation includes:
- the adoption of more specific legal provisions (normative implementation),
- the interpretation, application, and enforcement of the AI Act by public authorities (administrative implementation) and
- its interpretation by the courts (judicial implementation).
Implementation can take place both at the level of the EU (e.g. when the European Commission issues guidelines or adopts implementing/delegated acts) and at Member States’ level (e.g. when Member States adopt more specific rules or when Member State authorities enforce the AI Act).
When implementing the AI Act, both the European Commission and the Member States are required to respect the choice of the European legislator in following a risk-based approach. This follows both from the preparatory workFootnote 75 and from the ratio legis as laid down in recital (26) AI Act, as explained above in Section I.
Apart from the ratio legis of the AI Act itself, an interpretation of the AI Act in the light of EU primary law also supports the view that the implementation of the AI Act should follow a truly risk-based approach. As explained above, the principle of (legislative) proportionality is enshrined in Article 5 TEU. The CJEU has consistently held that:
“the principle of proportionality is one of the general principles of Community law. By virtue of that principle, the lawfulness of the prohibition of an economic activity is subject to the condition that the prohibitory measures are appropriate and necessary in order to achieve the objectives legitimately pursued by the legislation in question; when there is a choice between several appropriate measures recourse must be had to the least onerous, and the disadvantages caused must not be disproportionate to the aims pursued.”Footnote 76
As a result, both secondary law (ratio legis of the AI Act) and primary law (principle of legislative proportionality) favour the implementation of the AI Act on the basis of a truly risk-based approach.
2. Tools in the AI Act to support future-proof legislation
Although some key provisions of the AI Act are not consistent with a truly risk-based approach, the Regulation provides for sufficient tools to interpret, specify, and even amend it accordingly.
First, many provisions contain broad language that is subject to interpretation. To this end, the Regulation relies not only on the courts (and ultimately on the CJEU), but also on the European Commission and its AI Office to develop guidelines (Article 96 AI Act).
Moreover, Article 97 AI Act gives the European Commission the power to adopt delegated acts, in particular to modify Annex III use-cases (Article 7(1) and 7(3) AI Act) and the conditions under which these systems shall not be considered to be high-risk (Article 6(6)–(7) AI Act), as well as to amend or supplement the thresholds, benchmarks and indicators for classifying GPAI models as “systemic risk” (Article 51(3) AI Act), including the criteria set out in Annex XIII for the designation of GPAI models with systemic risk (Article 52(4) AI Act).
Other tools provided by the AI Act are harmonised standards developed by CEN/CENELEC as well as codes of practice for GPAI. As soon as the European Commission approves harmonised standards and/or codes of practice, providers can rely on them to demonstrate compliance with their obligations (Article 40(1), 53(4)(1) and 55(2)(1) AI Act).
All of these instruments – guidelines, delegated and implementing acts of the European Commission, as well as harmonised standards and codes of practice – are powerful tools to clarify, concretise, amend, supplement, modify or even delete a large number of provisions of the AI Act in order to “take into account the rapid pace of technological development, as well as the potential changes in the use of AI systems” (recital 52 AI Act). Indeed, such measures are necessary to implement the AI Act in accordance with a truly risk-based approach – in line with its ratio legis and the principle of legislative proportionality enshrined in Article 5 TEU.
For this, the following aspects should be taken into account.
3. Risk-benefit analysis and evidence-based high-risk categories
As explained above, the AI Act lacks an explicit, independent risk-benefit analysis and evidence-based (high-)risk categories. On the other hand, the European Commission possesses not only the authority to issue guidelines for the classification of AI systems as high-risk under Annex III, but also the power to amend, modify or remove use-cases for high-risk AI systems in Annex III (Article 7(1) and 7(3) AI Act) and to modify or add new conditions under which Annex III high-risk AI systems shall not be considered to be high-risk according to Article 6(3) AI Act (Article 6(6)–(7) AI Act).
Here, the Commission should take into account both the potential harm that AI systems may cause, and their economic and social benefits. To a certain extent, the Commission already has to assess the beneficial effects if it wants to amend Annex III by adding or modifying use-cases of high-risk systems, as Article 7(2)(j) AI Act requires the Commission in these cases to also take into account “the magnitude and likelihood of benefit of the deployment of the AI system for individuals, groups, or society at large.”Footnote 77 However, the benefits should not be one of many factors to be considered under the umbrella of a risk analysis, but an independent criterion to be weighed against the risks.Footnote 78
Moreover, the classification of AI systems as “high-risk” should be based on sufficient empirical evidence. This requirement is also present in the AI Act. Specifically, Article 6(5) AI Act states that the Commission, when adopting delegated acts to add new conditions under which an AI system referred to in Annex III is not to be considered high-risk, shall follow “concrete and reliable evidence.” However, such evidence-based risk analysis should be carried out not only in this context, but generally when defining (or changing) the risk categories, together with a clear methodology, explanation and documentation.
4. Regulation of GPAI models
With respect to GPAI models, the European Commission also has flexible tools at its disposal to address at least some of the concerns raised above.
This applies in particular to the thresholds for classifying GPAI models as “systemic risk,” which can be amended by delegated acts (Article 51(3) and Article 52(4) AI Act). Thus, the Commission can revise the criteria, especially the FLOPs threshold, which is not based on empirical evidence but is rather the result of a political compromise. Article 51(3) AI Act even goes beyond the mere updating of the FLOPs threshold by empowering the Commission to also “supplement benchmarks and indicators in light of evolving technological developments, such as algorithmic improvements or increased hardware efficiency, when necessary, for these thresholds to reflect the state of the art.” Accordingly, the Commission has the opportunity to use real-world evidence to set and define the systemic risk threshold by going beyond FLOPs and adding or replacing them with new benchmarks.
In addition, there is a need to develop a common methodology for defining systemic risks and to establish measures as well as benchmarks for assessing and managing these risks pursuant to Article 55(1) AI Act. It is therefore essential that GPAI model providers, standard setting organisations, national authorities, civil society organisations and other relevant stakeholders work together with the AI Office to develop codes of practice in this area, considering international approaches.Footnote 79
This is likely to be a major challenge, as the codes of practice shall be ready at the latest by 2 May 2025 (Art. 56(9)(1) AI Act) – a very short timeframe for drafting a code based on very vague rules. Against this background, the focus should be primarily on the risk-based approach – addressing only relevant risks and sticking to the letter of the AI Act.
5. Overly broad AI definition and mandatory requirements for high-risk AI systems
Proper risk-based implementation can also mitigate the overly broad definition of AI and the problem that deterministic software systems used in high-risk sectors are subject to the same mandatory (strict) requirements as unpredictable AI systems based on machine learning. While the definition of AI systems in Article 3 AI Act cannot be changed by the Commission but only interpreted, many of the requirements in Articles 8–15 AI Act are worded broadly enough to be applied in a manner that takes into account the fact that AI systems pose different risks due to their different levels of autonomy and adaptability:
- For example, the requirements laid down in Article 10 AI Act on data and data governance appear to apply only to AI systems “which make use of techniques involving the training of AI models with data,” thus excluding other AI systems that are not based on machine learning.
- Moreover, the wording of Article 13(1) AI Act that high-risk AI systems shall be “sufficiently” transparent to enable deployers to interpret the output of a system and use it “appropriately” is open enough to distinguish between different AI systems with different levels of transparency.
- With respect to human oversight, Article 14(3) AI Act explicitly emphasises that the necessary oversight measures must be “commensurate with the risks, level of autonomy and context of use of the high-risk AI system.”
- Furthermore, Article 15(1) AI Act speaks only of an “appropriate” level of accuracy, robustness and cybersecurity, whereas Article 15(4) AI Act provides specific rules for high-risk AI systems “that continue to learn.”
Against this background, it seems sufficient that the European Commission issues guidelines on the application of Articles 8 to 15 (Article 96(1)(a) AI Act) to clarify how these articles apply with regard to different AI technologies.
6. Double regulatory burdens and overlap of enforcement structures
Guidelines of the European Commission can also help companies to deal with overlaps, inconsistencies and (potential) contradictions between the AI Act and other EU legislation to avoid double regulatory burdens and over-enforcement. Article 96(1)(e) AI Act requires the Commission to develop guidelines with “detailed information” on the relationship of the Act with other relevant Union law, including with regard to “consistency in their enforcement.”
To this end, the European Commission should conduct an in-depth analysis to identify overlaps and contradictions between the AI Act and other horizontal or sectoral legislation. Based on this research, guidelines could then be rolled out to help clarify the relationship between these laws (i.e. lex specialis, lex generalis, and complementary laws).Footnote 80
Moreover, the European Commission should carry out an assessment of the relationship between the AI Act governance bodies and other governance bodies on EU and Member State level. Guidelines could then clarify whether a particular governance body is the lead authority and how the different bodies should cooperate or relate with each other.Footnote 81
Whereas guidelines could make an important contribution to clarifying the interplay between the AI Act and other EU laws (including the relationship between different enforcement bodies), they can only interpret existing laws, not change them. Yet, many of the discussed issues cannot be solved by interpretation alone, as they are rooted in legal inconsistencies, mostly at EU level. Therefore, only a revision of sector-specific EU laws can deal with these issues.
V. Regulating AI outside the EU: lessons from the AI Act
Lawmakers around the world are studying the AI Act to determine whether they should follow the European Union’s lead and adopt similar laws to regulate AI systems. This is the so-called “Brussels effect” – a term coined by Anu Bradford to describe Europe’s global footprint in terms of triggering emulation in other legal systems. In its original formulation, the Brussels effect was seen mainly as a de facto phenomenon whereby companies voluntarily follow EU rules in standardising a product or service, making their business processes simpler.Footnote 82 However, it can also take a de iure form, where countries outside the EU adopt EU-like regulations.Footnote 83
Over the past years, scholars have wondered whether the AI Act will unleash a new Brussels effect. While some claim this could be the case,Footnote 84 others disagree.Footnote 85 EU policymakers, for their part, strongly believe in the Brussels effect: from the outset, the AI Act was designed with its extraterritorial effects in mind.Footnote 86
In the following sections, it is argued that the AI Act is unlikely to produce a de iure Brussels effect (1.). Moreover, considering the lessons the AI Act can teach lawmakers around the world, such an effect would also be undesirable (2.).
1. Why the AI Act won’t trigger a Brussels effect
One major obstacle to foreign legislators simply copying and pasting the AI Act is the complexity of AI as a policy area. Unlike the GDPR – which has indeed served as a model for data protection regulation around the world – artificial intelligence does not present a single policy problem (e.g. how to protect the fundamental right to privacy), but rather a set of loosely connected problems – ranging from the protection of health and safety to a variety of fundamental rights. Even when legislators follow the definition of AI as set out in the AI Act, which is based on the recently updated OECD definition, there is little agreement worldwide on who and what should be regulated: public administration, law enforcement bodies, the judiciary, alternative dispute resolution mechanisms, and/or the (entire) private sector? Autonomous weapons systems? Self-driving vehicles and other cyber-physical machines? Medical devices and expert systems? Social scoring? Employment? Social welfare? Biometric identification/categorisation systems? Credit scoring? Life and health insurance? Algorithmic recommender systems used by platforms? AI-based contracts?
Moreover, there is little international consensus on the how of such a regulation, i.e. how to apply fundamental values such as human dignity and autonomy, fairness, transparency etc., in a given context. While global agreements in recent years – such as UNESCO’s AI Recommendation and the OECD’s AI Principles – recognise such fundamental principles, the ambiguity of these high-level agreements accommodates different political and ethical positions, allowing states to interpret them differently.Footnote 87 As Roberts and others point out: “Take AI fairness, a principle supported by all G20 member states, as applied to facial recognition technology. The implementation of this principle in the EU context involves the proposed banning of these technologies, while in China, ethnic-recognition technologies are permissible in the name of order and social stability.”Footnote 88
Even the adoption of a regulatory technique such as the risk-based approach, which appears to be politically neutral, involves fundamental policy choices: Does a country want to eliminate/ban (certain) risks, reduce them to an acceptable level, or strike a balance between risk reduction and the costs of regulation? If it is the latter: How does it balance risks and benefits in a specific sector and/or use-case?
Another major obstacle to foreign legislators simply following the EU’s approach is that the AI Act does not establish a comprehensive legal framework that can be adopted tel quel. Instead, it interacts in a very complex way with a rather sophisticated system of existing EU laws. In particular, the AI Act both complements existing product safety legislationFootnote 89 and builds, at the same time, on this legislation for the purpose of risk classification.Footnote 90 Moreover, the AI Act complements existing EU non-discrimination law with specific requirements aimed at minimising the risk of algorithmic discrimination. The AI Act further complements EU data protection law. This means that any processing of personal data by an AI system must comply with EU data protection law (e.g. the GDPR) and the AI Act. Last but not least, the Act provides for a number of future-proof instruments that will complement the Regulation, such as delegated and implementing acts, codes of practice and harmonised standards. As a result, it would not make sense to adopt the AI Act in isolation, as such a piece of legislation would be neither comprehensible nor meaningful without the rich body of existing EU law.
For all these reasons, it is unlikely that the AI Act will become the new global standard.
2. Why the AI Act shouldn’t trigger a Brussels effect
There are also important reasons as to why a de iure Brussels effect of the AI Act is not desirable.
First, it is noteworthy that the economic, social, legal and political situations of countries differ considerably – and, as a consequence, so does the way in which countries and their citizens are affected by AI.Footnote 91 Second, AI-specific regulation is still in its early stages, and it is unclear what the social and economic consequences of the AI Act will be. If it turns out that the AI Act has significant unforeseen negative effects, a Brussels effect would see these duplicated around the world.Footnote 92 Third, the existence of different AI regulations in different countries can – under the right conditions – stimulate experimentation and innovation in regulation through trial and error.Footnote 93
Fourth, from a fundamental rights perspective, scholars rightly point out that the AI Act is the product of constitutional constraints.Footnote 94 Since the EU has no general competence to harmonise fundamental rights in the Member States, it relies instead on the competence to promote the internal market (Article 114 TFEU). As a result, the AI Act does not follow a fundamental rights approach, but instead uses product safety law and risk regulation. However, such an approach is ill-suited to protect human rights. Given that most countries (or even regions) are not subject to the same competence constraints as the EU, they can (and should) use other approaches to address fundamental rights issues in relation to AI.
Finally, legislators around the world should take into account that key provisions of the AI Act do not follow a truly risk-based approach – particularly in view of the missing risk-benefit analysis, the limited reliance on empirical evidence, the pre-defined, closed risk categories, the treatment of systemic risks of GPAI models, the overly broad definition of AI, double regulatory burdens and overlapping enforcement structures. It is also, and above all, for these reasons that regulators outside the EU should not blindly follow the EU approach.