
15 - The EU’s Artificial Intelligence Laboratory and Fundamental Rights

from Part IV - Testing the Remedies System

Published online by Cambridge University Press:  21 December 2024

Melanie Fink
Affiliation:
Leiden University

Summary

This contribution examines the possibilities for individuals to access remedies against potential violations of their fundamental rights by EU actors, specifically EU agencies’ deployment of artificial intelligence (AI). Presenting the intricate landscape of the EU’s border surveillance, the chapter sheds light on the prominent role of Frontex in developing and managing AI systems, including automated risk assessments and drone-based aerial surveillance. These two examples are used to illustrate how the EU’s AI-powered conduct endangers fundamental rights protected under the EU Charter of Fundamental Rights. Risks emerge for privacy and data protection rights, non-discrimination, and other substantive rights, such as the right to asylum. In light of these concerns, the chapter then examines the possibilities to access remedies by first considering the impact of AI uses on the procedural rights to good administration and effective judicial protection, before clarifying the emerging remedial system under the AI Act in its interplay with the EU’s existing data protection framework. Lastly, the chapter sketches the evolving role of the European Data Protection Supervisor, pointing out the key areas demanding further clarifications in order to fill the remedial gaps.

Type: Chapter
Information: Redressing Fundamental Rights Violations by the EU: The Promise of the ‘Complete System of Remedies’, pp. 391–421
Publisher: Cambridge University Press
Print publication year: 2024
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

15.1 Introduction

This contribution examines the possibilities for individuals to access remedies against potential violations of their fundamental rights by EU actors, specifically the EU agencies’ deployment of artificial intelligence (AI). Presenting the intricate landscape of the EU’s border surveillance, Section 15.2 sheds light on the prominent role of Frontex in developing and managing AI systems, including automated risk assessments and drone-based aerial surveillance. These two examples are used to illustrate how the EU’s AI-powered conduct endangers fundamental rights protected under the EU Charter of Fundamental Rights (CFR).Footnote 1 These risks emerge for privacy and data protection rights, non-discrimination, and other substantive rights, such as the right to asylum. In light of these concerns, Section 15.3 examines the possibilities to access remedies by first considering the impact of AI uses on the procedural rights to good administration and effective judicial protection, before clarifying the emerging remedial system under the proposed AI ActFootnote 2 in its interplay with the EU’s existing data protection framework. Lastly, the chapter sketches the evolving role of the European Data Protection Supervisor (EDPS) in this context, pointing out the key areas demanding further clarifications in order to fill the remedial gaps (Section 15.4).

15.2 EU Border Surveillance and the Risks to Fundamental Rights

As European integration deepens, the need for enhanced security measures has led to modernising the EU’s information systems and other border surveillance capabilities, increasingly involving tools that can be classified as AI systems. The latter term refers to ‘a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’.Footnote 3 Among the AI tools explored for use in the EU’s border surveillance are tools to support the detection of forged travel documents and the automated pre-processing of long-stay and residence permit applications for the Schengen Area, as well as the use of AI for risk assessments through the identification of irregular travelling patterns, high security risks, or epidemic risks. The design, testing, deployment, and evaluation of these systems is principally entrusted to eu-LISA – the agency responsible for the operational management of the systems in the area of freedom, security, and justice.Footnote 4 This task comes with an important caveat: the design of the AI tools to be used in border control is delegated – through expensive tenders – to private developers.Footnote 5 For instance, a €300 million contract was agreed in 2020 with Idemia and Sopra Steria for the implementation of the new Biometric Matching System (BMS), which processes sensitive data.Footnote 6 Similarly, EU agencies, such as the EU’s Border and Coast Guard Agency – Frontex,Footnote 7 invest heavily in developing AI-powered border surveillance systems, including aerial and other hardware tools.Footnote 8

Frontex is among the key EU actors whose tasks and powers are pointedly enhanced by the use of AI systems. Several factors are driving the increasing interest in using AI in the EU’s border security context. These include the need to process large amounts of data, the drive for cost and resource efficiency coupled with the decreasing costs of data storage and processing power, the political democratisation of AI technology, and the resulting influence of EU initiatives embracing the development and deployment of AI.Footnote 9 To illustrate the risks that AI uses pose to fundamental rights, the following discussion zooms in on Frontex’s AI-powered conduct (Section 15.2.1) before depicting the risks posed by these uses to fundamental rights (Section 15.2.2).

15.2.1 Frontex at the Forefront of Border Surveillance

Frontex is rapidly expanding its AI capabilities. Among those currently explored are AI tools for automated border control (e.g., e-gate hardware), document scanning tools, facial recognition and other biometric verification tools, maritime domain awareness capabilities, unmanned surveillance tools (i.e., ‘towers’ planted in border regions to detect illegal border crossings), and unmanned autonomous aerial surveillance systems.Footnote 10 Two examples deserve closer inspection to illustrate how AI uses by EU actors give rise to fundamental rights violations: automated risk assessments and AI-powered aerial surveillance.

15.2.1.1 Automated Risk Assessments

Automated risk assessment (ARA) refers to a process of identifying potential risks by using computer systems, algorithms, or data analysis techniques to evaluate risks in a given context. The ARA relies on extensive datasets that are widely available in the digital age.Footnote 11 Increasing reliance on automated risk assessments in the EU’s border security is not new. It emerges from a long-standing practice of informational cooperation in the EU’s area of freedom, security, and justice based on large-scale automated matching of personal data.Footnote 12 Among others, the exchange of detailed alert files occurs among the national competent authorities and the EU agencies via the Schengen Information System (SIS), the Visa Information System (VIS), and Eurodac, and soon also the Entry/Exit System (EES), the European Travel Information and Authorisation System (ETIAS), and the European Criminal Records Information System (ECRIS-TCN).Footnote 13 In addition, automated exchanges of personal data take place among the national authorities, EU agencies, and third parties, such as airline companies or online communication services, under specially set up frameworks, such as the PNR scheme.Footnote 14 These information exchange frameworks provide for automated assessments of gathered information in order to identify and locate potential security threats. The identification may rely on matches between alerts containing purely alphanumeric data concerning a specific individual. Generally, however, the data collected within the alerts also include sensitive biometric and genetic data,Footnote 15 such as DNA, fingerprints, or facial images, enabling advanced identification based on pre-defined algorithms embodying the characteristics of AI tools.Footnote 16

EU agencies, including Frontex, employ a range of automated risk assessment tools in the performance of their tasks. Specifically, Frontex will host the ETIAS Central Unit, managing the automated risk analyses in ETIAS.Footnote 17 From mid-2025, the system will undertake pre-screening of about 1.4 billion people from sixty visa-exempt countries for their travel to the Schengen states.Footnote 18 The pre-screening aims to contribute to a high level of security, prevent illegal immigration, prevent, detect, and investigate terrorist offences or other serious crimes, as well as protect public health.Footnote 19 Beyond the ETIAS Central Unit hosted by Frontex, ETIAS will rely on the National Units of the thirty participating European countries and on the Central System itself, which is developed and maintained by eu-LISA.Footnote 20 The fast processing of future travel applications will be ensured by an automated risk assessment performed by the ETIAS Central System.Footnote 21

The risk assessment will entail a threefold comparison of travel application data. First, the Central System will automatically compare the information submitted by the travel applicant against the alerts stored within the above-mentioned EU information systems, namely the SIS, VIS, Eurodac, and EES, as well as against Europol data and Interpol databases.Footnote 22 Second, the traveller’s application will be compared against a set of risk criteria pre-determined by the ETIAS Central Unit – that is, Frontex.Footnote 23 Lastly, the application will be compared against the ETIAS ‘Watchlist’ of persons suspected of involvement in terrorist offences or other serious crimes.Footnote 24 While the first category of ARA in the ETIAS process places the responsibility on the Member States (as primarily responsible for entering alerts into the EU large-scale databases), the latter two categories also directly involve EU agencies, namely Frontex and Europol (due to their role in setting up the ARA criteria or the ‘watchlist’). Given this chapter’s focus on AI uses by EU actors, only the Frontex-defined risk criteria encompassed within the pre-screening algorithm will be further discussed.
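To make the structure of this threefold comparison concrete, the following minimal sketch models it in code. It is purely illustrative: the data structures, matching logic, and outcome labels are assumptions, since the actual implementation of the ETIAS Central System is not public.

```python
from dataclasses import dataclass, field

@dataclass
class Application:
    """Hypothetical container for the data in an ETIAS travel application."""
    applicant_id: str
    data: dict
    hits: list = field(default_factory=list)

def threefold_assessment(app, eu_databases, risk_rules, watchlist):
    # Step 1: compare against alerts stored in the EU information systems
    # (SIS, VIS, Eurodac, EES), Europol data, and Interpol databases.
    for system, alerts in eu_databases.items():
        if app.applicant_id in alerts:
            app.hits.append(f"alert:{system}")
    # Step 2: compare against risk criteria pre-determined by the
    # ETIAS Central Unit (Frontex).
    for rule in risk_rules:
        if rule(app.data):
            app.hits.append(f"risk-criteria:{rule.__name__}")
    # Step 3: compare against the ETIAS Watchlist.
    if app.applicant_id in watchlist:
        app.hits.append("watchlist")
    # Any hit routes the application to manual verification;
    # no hit results in automatic authorisation.
    return "manual-review" if app.hits else "auto-approve"
```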

The Frontex-operated Central Unit is to formulate the risk criteria on the basis of risks identified by the EU Commission in corresponding implementing acts. These risks could be drawn from EES and ETIAS statistics on abnormal rates of overstaying and refusals of entry for a specific group of travellers, or from security, illegal immigration, or high epidemic risks established on the basis of information provided by Member States as well as by the WHO.Footnote 25 Based on this information, the ETIAS Central Unit will define the final screening rules underlying the ETIAS Central System’s algorithm.Footnote 26 Pursuant to Article 33(1) ETIAS Regulation, ‘these screening rules shall be an algorithm enabling profiling’ based on a comparison of the application data with specific risk indicators.

The algorithm will be built on a combination of data concerning the age range, sex, nationality, country and city of residence, level of education (primary, secondary, higher, or none), and current occupation.Footnote 27 These data will serve to evaluate a person’s behaviour, location, or movements based on the detailed history of their travels submitted in the ETIAS application form. This practice thus corresponds to profiling, which, pursuant to the EU data protection rules,Footnote 28 should be prohibited unless accompanied by strict safeguards.Footnote 29 Pursuant to the jurisprudence of the Court of Justice of the European Union (CJEU), the safeguards must ensure that the criteria used for profiling are targeted, proportionate, specific, and regularly reviewed, and that they are not based solely on protected categories such as age or sex.Footnote 30 The ETIAS algorithm may, however, target a specific country of origin or nationality, which can give rise to discrimination concerns, as discussed further below. In this respect, it is worth highlighting that ETIAS automated risk assessments will serve to select a rather small group of potential security threats from an ocean of otherwise innocent, law-abiding citizens. As the ETIAS explanatory website states, about 97% of applications are expected to be automatically approved, with the remaining 3% requiring further manual verification by the ETIAS Central Unit in cooperation with the National Units.Footnote 31
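As an illustration of what a single screening rule of this kind might look like, consider the hedged sketch below. The attribute names follow the application data listed above, but the rule itself, its indicator values, and its name are entirely invented; the real ETIAS screening rules are not public.

```python
def hypothetical_overstay_rule(data: dict) -> bool:
    """Invented example of a screening rule combining several risk
    indicators. Article 33 ETIAS Regulation envisages combinations of
    indicators rather than reliance on a single attribute; the values
    below are placeholders with no factual basis."""
    return (
        data.get("age_range") == "18-25"
        and data.get("education") == "none"
        and data.get("country_of_residence") in {"Country A", "Country B"}
    )
```

Even a combined rule of this shape illustrates the discrimination concern discussed below: where nationality or country of residence effectively dominates the combination, the profile can reproduce a protected ground indirectly.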

Every refusal of travel authorisation in ETIAS will have to be notified to the applicant, explaining the reasons for the decision.Footnote 32 The notice email should include information on how the applicant may appeal this decision and details of the competent authorities and the relevant time limits.Footnote 33 The appeals will be handled by the Member State refusing the entry and hence in accordance with that state’s national law.Footnote 34 Individuals without an ETIAS authorisation will be refused boarding at international airports or will be stopped when trying to cross Schengen’s external borders by land. Accordingly, it is of the utmost importance that the system’s AI component embodied within the algorithmic risk assessments does not lead to disproportionate interferences with individuals’ fundamental rights, including the rights to privacy, data protection, and protection from discrimination. Equally, the ETIAS National Unit authorities must be sufficiently trained and equipped to ensure that refusal decisions are not based solely on the automated hit in the system.Footnote 35

15.2.1.2 Aerial Surveillance

In another vein, Frontex employs AI tools to improve situational awareness and early response in pre-frontier areas. This activity is essentially facilitated through the European Border Surveillance System (EUROSUR).Footnote 36 The system is a crucial information resource enabling Frontex to establish situational pictures of the land, sea, and air to identify potential illegal crossings and vessels in distress.Footnote 37 The system contains information collected through aerial surveillance (including unmanned drones), automated vessel tracking and detection capabilities, software functionalities allowing complex calculations for detecting anomalies and predicting vessel positions, as well as precise weather and oceanographic forecasts enabled by the so-called EUROSUR fusion services deployed by Frontex.Footnote 38 With the help of the most advanced technology, Frontex is thus responsible for establishing the ‘European situational pictures’ and ‘specific situational pictures’ aimed at assisting the national coast guards of the EU and EU-associated states in the performance of border tasks.Footnote 39

The collection of information to be shared via EUROSUR increasingly relies on AI tools. Notably, in recent years, Frontex has significantly expanded its aerial surveillance arsenal.Footnote 40 This expansion required significant investments in advanced technology developed by private companies.Footnote 41 AI-powered drones or satellites enabling monitoring of the situation on land or sea do not directly pose risks to fundamental rights. However, reliance on such AI-powered surveillance tools gives the EU’s border authorities unmatched knowledge about the border situation, permitting the authorities to take actions that may put certain fundamental rights at risk, such as the right to asylum.

Furthermore, as the EU Fundamental Rights Agency states, ongoing development of these technologies and the sharing of the gathered intelligence through EUROSUR is likely to employ algorithms used to track suspicious vessels or extend to the processing of photographs and videos of ships with migrants by maritime surveillance aircraft.Footnote 42 In other words, the AI-powered information exchange will also directly implicate privacy and data protection rights. Therefore, the Frontex AI-powered border surveillance tools must also be subject to close legal scrutiny by independent supervisory authorities and potentially courts when risks to fundamental rights materialise.

The two examples of AI-powered information exchange frameworks examined here facilitate distinct types of border control conduct. On the one hand, the ETIAS automated risk assessments support decision-making by national authorities on whether or not to let someone into the Schengen area.Footnote 43 On the other hand, EUROSUR, accompanied by AI-powered land, sea, and air surveillance equipment, creates detailed situational pictures with clear instructions for actions to be taken in the context of joint operations between the Frontex teams and national border guard authorities concerning identified vessels carrying individuals, primarily refugees in need of international protection. The two examples pose distinct risks to the fundamental rights of the individuals concerned.

15.2.2 The Diverse Nature of the Risks to Fundamental Rights

EU law requires that any use of AI, including by EU actors, must comply with fundamental rights enshrined in the EU Charter of Fundamental Rights and protected as general principles of EU law, irrespective of the area of AI use concerned.Footnote 44 This follows from the Union’s character as a legal order based on the rule of law, founded, under Article 2 TEU, on values including respect for human dignity and human rights, including the rights of persons belonging to minorities.Footnote 45 With rapid technological progress, the use of AI as a system technologyFootnote 46 brings about an ever greater potential for misuse, which broadly impacts human dignity and various fundamental rights deeply connected to the inviolability of a human being.Footnote 47 This concern is broadly acknowledged within the international community and the EU,Footnote 48 which assert that the protection of human values, including fundamental freedoms, equality, fairness, the rule of law, social justice, data protection, and privacy, must remain at the centre of placing AI into use in modern democratic societies.

Preserving human dignity in the age of AI requires that individuals retain control over their lives, including over when and how they are subjected to the use of AI, rather than being subjected to it without their knowledge or informed consent. Putting humans and human dignity at the centre of the use of AI is necessary to ensure full respect for fundamental rights. It should thus be the starting point in every discussion on the development, deployment, and reliance on AI where human lives are at stake. However, as the Court of Justice has repeatedly held, fundamental rights ‘do not constitute unfettered prerogatives’.Footnote 49 They must be viewed in light of their function within society, and, if necessary, they may be limited as long as any interferences with the rights are duly justified.Footnote 50 Accordingly, the deployment of and reliance on AI shall be reviewed with the same set of considerations in mind: it must be legally authorised, respect the essence of specific rights, and be proportionate and necessary under the objectives of general interest recognised by the Union or the need to protect the rights and freedoms of others (Article 52(1) CFR).

The examples of AI use by Frontex examined above exhibit the breadth of cross-cutting fundamental rights concerns occurring in AI-powered border surveillance. Three key concerns can be highlighted: the risks to privacy and data protection (Articles 7 and 8 CFR), discrimination (Article 21 CFR), and risks to other substantive rights, such as the right to asylum (Article 18 CFR).

15.2.2.1 Privacy and Data Protection

Given that the functionality of AI relies on the wide availability of (personal) data, the discussions on the use of AI tend to revolve around the rights of privacy and personal data protection, enshrined in Articles 7 and 8 of the Charter. Although deeply interconnected, these rights are separate, embodying the more traditional right to privacy and the modern right to data protection.Footnote 51 They share a common goal of safeguarding individual autonomy and dignity by providing individuals with a personal space in which to freely develop their identities and thoughts, thus laying the foundation for exercising other fundamental rights, such as freedom of thought, expression, information, and association.Footnote 52

Privacy and data protection will generally be implicated in the examined uses of AI in border surveillance. On the one hand, data protection concerns arise extensively in the context of the large-scale processing of personal data for automated risk assessments. While it is far beyond the scope of this chapter to examine them all,Footnote 53 two risks are particularly worth mentioning. ARAs, such as those envisioned under the ETIAS, risk circumventing fundamental data protection principles, especially the purpose limitation and the related requirements of necessity and proportionality, as well as the prohibition on profiling, including profiling based on discriminatory grounds.Footnote 54 As explained above, the AI-powered ETIAS assessments will be based on a threefold comparison of personal data, including sensitive data, against existing EU databases, against risk criteria pre-defined by the ETIAS Central Unit operated by Frontex, and against the ETIAS Watchlist.

The EU systems’ interoperability will facilitate the comparisons against the EU large-scale databases.Footnote 55 Effectively, interoperabilityFootnote 56 will transform the border surveillance architecture by enabling far-reaching linking of personal information stored in silo-based alerts.Footnote 57 The interlinking of databases will blur the boundaries between law enforcement and intelligence services and between the tasks of the EU and national law enforcement and migration authorities, undermining data protection safeguards.Footnote 58 Specifically, in the ETIAS authorisation process, the purpose limitation principle as a critical data protection safeguard seems to disappear completely, for instance, due to the requirement that the ETIAS Central Unit, hosted by Frontex, shall have access to ‘any linked application files, as well as to all the hits triggered during automated processing’.Footnote 59

Furthermore, the comparison against screening criteria defined by Frontex will employ algorithms to evaluate the risk factor of a specific individual, akin to a practice of profiling, to facilitate decisions about individuals’ lives. According to Article 22(3) of the GDPR, such automated decisions should, in principle, be prohibited unless accompanied by sufficient safeguards, including meaningful human intervention.Footnote 60 Since ETIAS assessments will lead to automatic authorisationsFootnote 61 and quasi-automated refusals of entry,Footnote 62 these decisions will have significant consequences for individuals. Automated risk assessments will not only interfere with the data protection right but also pose a distinct threat of discrimination while making access to remedies ever more difficult, as discussed below.

On the other hand, privacy concerns will feature, for instance, wherever surveillance measures are employed in public places. Aerial surveillance, such as by aircraft or drones recording situational pictures of the land or seas, is increasingly used by Frontex and can interfere with individuals’ privacy by closely monitoring their location, behaviour, movements, and other aspects of personal activities without their knowledge or consent. The use of aerial surveillance technologies allows for gathering visual and sometimes audio information from above, which can capture private moments and sensitive information.Footnote 63 This intrusion can violate individuals’ rights to privacy and data protection and potentially expose vulnerable persons to unwarranted conduct by surveillance authorities. It is of the utmost importance that whenever such technologies evolve into increasingly sophisticated people-monitoring tools, their deployment is limited to the original purposes, with strict legal safeguards in place and effective opportunities to seek redress in case of misconduct.

15.2.2.2 Risk of Discrimination

Article 21 of the Charter guarantees individuals protection against any form of discrimination based on the protected grounds, such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, and others. Discrimination concerns are fundamental to the discussions on subjecting human lives to the uses of artificial intelligence. This is because the very purpose of any computational analysis through algorithmic data processing is to evaluate, categorise, or otherwise discover patterns in the analysed data, including personal data. However, human or machine bias may affect the algorithmic output in many ways.Footnote 64

Risk assessment systems, such as the ETIAS, rely on an algorithmic model, which processes large amounts of personal data to make decisions about individual lives. Pursuant to Article 14 of the ETIAS Regulation, any processing of personal data ‘shall not result in discrimination against third-country nationals’. However, these assessment algorithms are designed and trained on personal data, which includes the protected grounds under the right to non-discrimination, such as sex, age, place of birth, or nationality.Footnote 65 Therefore, to guarantee the right to non-discrimination, the criteria used to train the ETIAS algorithm to evaluate a certain risk behaviour of a specific individual need to be carefully designed to avoid perpetuating or even amplifying existing societal biases.Footnote 66 Indeed, discriminatory misconduct has been found to occur in other administrative contexts, such as the infamous welfare allocation scandal in the Netherlands.Footnote 67 The quality and lawfulness of the data stored in the EU large-scale systems used for ETIAS comparisons, as well as the design of the risk criteria to be used in the ETIAS ARA-based authorisations, must cautiously balance the non-discrimination requirements of EU data protection rules with the requirements of the Charter right. Namely, the algorithm must be built to ensure that any AI-driven decision-making is not based solely on special categories of data reflecting the protected grounds under Article 21 of the Charter.Footnote 68 The safeguards, including meaningful transparency for the manual human review following an automated match, must be effective in practice. This includes effective enforcement of compliance with these safeguards, given that such AI tools are powerful in nudging the national competent authorities to decide in a certain way, which may lead to other violations.Footnote 69

Recently, the Court of Justice enumerated the essential guidelines for designing the risk criteria for algorithmic assessments in the security context. In the Ligue des droits humains judgment,Footnote 70 the Court interpreted the EU’s PNR scheme as requiring that the pre-determined risk criteria against which passengers’ name records are compared be defined in a way that keeps incorrect identifications to a minimum.Footnote 71 To achieve this aim, any match must be individually reviewed by non-automated means to highlight any false positives and identify discriminatory results.Footnote 72 Furthermore, such review will be effective only where it is clearly established as a requirement in the rules of conduct in the specific context, is well documented, and the officials are sufficiently trained, including to ‘give preference to the result of the individual review conducted by non-automated means’.Footnote 73

This requirement of manual review is especially crucial since confronting direct or indirect discriminatory effects in AI-driven decision-making in legal proceedings is rather difficult for the affected individuals.Footnote 74 Indeed, in this respect, the Court also demands that the affected individuals are informed about the pre-determined assessment criteria so as to enable them to understand and defend themselves,Footnote 75 as discussed in the next section.

15.2.2.3 Risks to Other Substantive Rights

Beyond the rights to privacy, data protection, and non-discrimination, AI uses in border surveillance might directly or indirectly implicate other substantive fundamental rights. For instance, AI-powered border surveillance may lead to the detention of individuals presenting themselves at the land borders without a valid ETIAS authorisation, interfering with their right to liberty and security (Article 6 CFR). In another vein, AI-powered aerial surveillance enabling identification of migrants at sea might lead to wrongful actions being taken in Frontex-led operations, possibly leading to violations of individuals’ right to life (Article 2 CFR). Recent investigations by human rights organisations revealed evidence that information gathered from Frontex-operated aerial surveillance has been utilised to facilitate illegal pushbacks of refugees that may contravene their right to asylum (Article 18 CFR).Footnote 76

In conjunction with the use of drones, EU Member States have engaged in cooperative agreements with southern Mediterranean countries, such as Libya and Turkey, to intercept and return migrants, thereby externalising the responsibility for these actions.Footnote 77 This approach prevents other vessels from intervening or disembarking rescued individuals in supposedly safe harbours. EU Member States have justified these measures by claiming that search and rescue activities act as a ‘pull factor’ for migrants coming to EU countries. Frontex has often been viewed as a passive bystander in this context, given the division of responsibilities in the EU’s integrated border management (EIBM).Footnote 78 Under the EIBM, the final responsibility still lies with the Member States. Lately, this division has been criticised as it transpired from a classified EU reportFootnote 79 that Frontex knowingly contributed to illegal pushback practices.Footnote 80 These practices violate the right to asylum under Article 18 CFR and the cornerstone of international human rights law – the principle of non-refoulement.Footnote 81

As already mentioned above, with the continuing development of aerial surveillance, new risks to fundamental rights will emerge. These risks might arise from the processing of photographs and videos of vessels with migrants on board, as well as from the potential implications of the algorithms that will be used to track the vessels flagged as suspicious. All these AI-powered capacities of the EU’s Frontex-led border surveillance will expand the above risks to privacy, data protection, and non-discrimination and may continue to indirectly support unlawful practices, such as decisions on whether or not to save the lives of individuals in distress at sea and those in need of international protection.Footnote 82

15.3 Exploring the Possibilities for Access to Remedies

In the EU legal order, when a person considers that EU actors have violated their rights, they have the right to seek an effective remedy (Article 47 CFR). The use of AI, however, brings considerable challenges to ensuring that AI-powered conduct is both non-arbitrary and sufficiently reviewable to fulfil the requirements of this constitutional guarantee, which constitutes ‘the essence of the rule of law’.Footnote 83 To assess the properties of the EU remedial architecture, it is therefore necessary to also consider the interrelated impacts of AI on the exercise of procedural requirements under the rights to good administration and effective judicial protection (Section 15.3.1). The discussion then turns to the construction of remedies based on the scope and interplay of the upcoming AI Act with the EU’s existing data protection framework (Section 15.3.2).

15.3.1 The Impact of AI Use on Individuals’ Access to Remedies

Article 41 CFR guarantees to everyone the right to good administration in decisions or other legal acts adopted by EU actors. Historically, the CJEU interpreted this right as a general principle of EU law,Footnote 84 which expanded its application wherever EU law applies. Under its umbrella,Footnote 85 the right to good administration enshrines rights and obligations whose core function is to enable legal accountability for public conduct. On the one hand, the right demands that the authorities act fairly, impartially, and within a reasonable time. On the other hand, it obliges the authorities to present sufficient reasons substantiating their acts vis-à-vis the affected persons. In TUM, the Court formulated the interplay of good administration requirements as ‘the duty of the competent institution to examine carefully and impartially all the relevant aspects of the individual case’ prior to decision-making.Footnote 86 In Nölle, the Court further recognised this duty of care as an individual right arising from the clear, precise, and unconditional obligation in Article 41 CFR.Footnote 87 The authorities’ compliance with their duty of care obligations ensures that the affected person understands the evidentiary basis of the decision in order to decide whether or not to seek remedies against it. Since the duty of care constitutes an essential procedural requirement,Footnote 88 failure to comply with it may lead to the annulment of the decision.Footnote 89

It is in its defence-enabling function that the right to good administration also becomes central to remedial possibilities against AI-driven EU conduct. In this context, compliance with the good administration requirements faces significant obstacles. The opacity of algorithmic risk assessments, exemplified in the ETIAS authorisation process, poses substantial challenges to the authorities’ ability to reason their decisions and ensure that these are based on factually correct, relevant, and complete information. As explained above, each of the 1.4 billion visa-exempt citizens who apply through the ETIAS website will be automatically screened for any suspicion of posing a serious threat to public security. This suspicion will be found to exist whenever automated processing of the traveller’s application results in a hit against the pre-determined risk criteria in conjunction with an automatic comparison with millions of alerts stored in other EU information systems. If the process results in a hit, the competent authorities will have to manually review the data, ensuring the possibility to contradict the automated result, in view of their duty of care obligations. This requirement of human intervention ensures that each rejection of travel authorisation is not a decision based solely on automated processing of personal data (Article 22(3) GDPR).
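The logic of this safeguard can be sketched in a few lines; the decision labels and the reviewer input are hypothetical simplifications of the process described above.

```python
def final_decision(hits: list, reviewer_confirms_risk: bool) -> str:
    """A refusal may rest only on the manual review, never on the
    automated hit alone (cf. Article 22(3) GDPR)."""
    if not hits:
        # No hit: automatic authorisation (the ~97% case).
        return "authorised"
    # A hit merely triggers human verification; the reviewer must be
    # able to contradict the automated result.
    return "refused" if reviewer_confirms_risk else "authorised"
```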

However, to what extent will the manual review verify the correctness, relevance, and completeness of the information so as to uncover whether or not the hit was, for instance, due to discriminatory profiling by the pre-screening algorithm? This question does not permit an easy answer,Footnote 90 especially considering the context in which the manual reviews will take place: they will be time-pressured (the ETIAS rules envisage a response within a few days where manual verification is required), conducted by officials without sufficient AI expertise, and subject to other constraints, such as the well-documented automation and confirmation biases in manual reviews,Footnote 91 coupled with limited access to the training data underpinning the risk assessment algorithm.

The diminished potential to meet the requirements of good administration in AI-powered decision-making will have direct implications for individuals’ access to effective remedies. In fact, in its jurisprudence, the CJEU often equates the requirements of reasoning under the right to good administration with the requirements of an effective remedy under Article 47 CFR.Footnote 92 Their interplay, according to the Court, lies in enabling the person to ascertain the reasons upon which the decision is based, ‘so as to make it possible for him or her to defend his or her rights in the best possible conditions’.Footnote 93 Individuals will only be able to defend themselves when it is indeed possible to understand the relevant decision and the process under which it was taken. Additionally, the Court recognises the significance of reasoning for the ability of judges and other supervisory authorities to exercise effective review. Indeed, these concerns are reflected in the AI Act, which brings about specific transparency requirements intended to facilitate AI users’ ability to exercise meaningful human control over AI-generated outputs.

Before turning to the AI Act, it remains to be stressed that judicial remedies are rather limited in the context of AI-powered conduct based on composite administrative procedures involving actors at EU and Member State levels.Footnote 94 The courts’ jurisdiction to review AI-driven decision-making is territorially limited and constrained by the narrow notion of what constitutes a reviewable act.Footnote 95 On the one hand, the former prevents individuals from challenging the conduct of the EU actors directly before the Court of Justice when the responsibility lies with the national authorities, such as in the case of refusals of ETIAS applications or illegal pushbacks of migrants.Footnote 96 The staff of the competent ETIAS National Unit will need to manually review the automated refusals and hence exercise final discretion.Footnote 97 Also, where an ETIAS ARA results in a hit with the information entered by Europol, the ETIAS Regulation only establishes a consultation procedure between Europol and the responsible Member State.Footnote 98 Under this procedure, Europol must provide the responsible Member State with a ‘reasoned opinion on the application’.Footnote 99 Nonetheless, the final decision – hence the final discretion – lies with the Member State concerned. Accordingly, complaints against potential discriminatory effects of the ETIAS algorithm in these circumstances might only be raised before the courts of that State, without the possibility of uncovering the factual basis of the information supplied by Europol and relied on in the refusal decision.

On the other hand, these potential discriminatory effects of the underlying risk criteria might not be deemed to produce legal effects sufficient to trigger the justiciability of the ARA. Indeed, pursuant to the requirement of ‘direct and individual concern’, the impact of the discriminatory effect might not occur for each person whose application has been refused by the ETIAS ARA.Footnote 100 Accordingly, as argued elsewhere,Footnote 101 construing reviewability in a similar context needs to reflect the underlying automation and output biases if we expect EU law to guarantee sufficient legal protection to the affected individuals. Yet, for now, EU law does not recognise the impact of the screening algorithm on the final decisions taken by the Member States.Footnote 102 As such, there seems at present to be no possibility of a direct judicial remedy against the Frontex-based ETIAS Central Unit for its role in the development and deployment of the ETIAS risk criteria algorithm.

15.3.2 Unwrapping the Remedial Possibilities under the Upcoming AI Act

In 2018, EU legislators embarked on the process of designing specific rules governing the development and use of artificial intelligence systems in the EU that would ensure, inter alia, full respect for fundamental rights.Footnote 103 The efforts culminated in the proposal for a horizontal regulation of AI – the AI Act.Footnote 104 Central to this is the EU’s self-proclaimed ‘human-centric approach to AI’, which shall ‘strive to ensure that human values are central to the way in which AI systems are developed, deployed, used and monitored, by ensuring respect for fundamental rights’.Footnote 105

That the human-centric Act will mitigate the above-identified risks to fundamental rights should, however, not be taken for granted. With its legal basis in Article 114 TFEU,Footnote 106 the AI Act will first and foremost ensure the safe operation of AI systems on the EU’s internal market (Article 1). The Act’s pledge to guarantee respect for fundamental rights is arguably manifested in its ‘risk-based approach’. On the most basic level, the Act differentiates between ‘prohibitions of certain [AI] practices’ and ‘specific requirements for high-risk AI systems’ (Article 1(2)). The former include systems using subliminal techniques beyond a person’s consciousness (manipulation), uses of AI that may exploit vulnerable individuals (Article 5(a) and (b)), and real-time biometric identification systems in publicly accessible spaces for law enforcement purposes, except where duly authorised (Article 5(c)).Footnote 107 According to the latest text, the list of high-risk systems in Annex III should be amended to consider the impact of the AI use on the relevant action or decision to be taken.Footnote 108 Ultimately, the concept of risk to fundamental rights underpinning the AI Act’s approach is defined as ‘the combination of the probability of an occurrence of harm and the severity of that harm’.Footnote 109
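The Act itself prescribes no quantitative methodology, but this definition of risk is commonly operationalised in risk-management practice as a function of probability and severity. The sketch below is purely illustrative: the product formula, the scales, and the threshold are invented and do not reflect the Act’s own classification of prohibited or high-risk systems, which is defined by type of practice rather than by score.

```python
def risk_score(probability: float, severity: float) -> float:
    """Illustrative 'combination of probability and severity of harm':
    a simple product over [0, 1] scales, one common operationalisation."""
    return probability * severity

# Invented threshold: flag a use of AI for closer regulatory scrutiny.
needs_scrutiny = risk_score(probability=0.3, severity=0.9) >= 0.25
```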

Although far from settled, the final elaboration of the prohibited and high-risk AI uses will determine the extent to which the AI Act can provide some form of fundamental rights protection also in the context of AI uses by the EU actors. Yet, as we await clarity on the AI Act’s final shape, one aspect is clear: the AI Act will need to be applied in conjunction with existing EU law, including the rules on remedies and the existing data protection rules, wherever the system relies on, among others, the processing of personal data. Accordingly, the following discussion highlights two key aspects that will be determinative of the protection of fundamental rights from the risks posed by the EU’s AI uses: first, the scope of application of the AI Act to the EU actors’ use of AI and, second, the interplay and main discrepancies between the AI Act’s substantive rules and the data protection rules.

15.3.2.1 The Scope of Application of the AI Act to Border Surveillance

The Act emerges as the EU’s effort to establish general rules on the development, authorisation, and deployment of AI systems in the Union. Accordingly, its provisions will need to be complied with in their entirety. Pursuant to Article 2 of the AI Act, the rules apply both to ‘providers’ placing on the market or putting into service AI systems, irrespective of their location, and to ‘deployers’ of AI systems established in the Union.Footnote 110 The AI Act will also apply to EU actors when acting as a provider or deployer of an AI system. There are some exceptions, however. For instance, the AI Act will not apply to AI systems developed or used exclusively for military purposes or for purely research purposes.Footnote 111

More worryingly, the initial Commission Proposal excluded from its scope the AI systems that are ‘components of the large-scale IT systems’, such as the SIS, EES, or ETIAS, placed on the market or put into service before the entry into force of the regulation.Footnote 112 This has been revised to require that these systems comply with the AI Act by 31 December 2030.Footnote 113 Nevertheless, both solutions leave the expansive use of AI systems in the EU’s border surveillance without immediate rules that could address the above-identified risks of AI uses. As the EDPS and the European Data Protection Board (EDPB) highlighted in their joint opinion, such exclusion ‘risks circumventing the safeguards enshrined in the AI Act’.Footnote 114 It also undermines the broader exercise of powers by the competent supervisory authorities, such as the EDPS, when presented with complaints regarding AI uses and claims of violations of the data protection rules.

15.3.2.2 The Interplay between the AI Act and the EU Data Protection Rules

The new safeguards introduced under the AI Act might only contribute to enhanced fundamental rights protection if their interaction with the existing data protection rules is properly considered.Footnote 115 However, a number of discrepancies appear between the two legal frameworks, which may pose difficulties for the EDPS, which acts as the first-instance avenue for addressing potential fundamental rights violations by EU actors. Three aspects of this interplay specifically affect individuals’ access to remedies.

First, the EU data protection framework is far from homogeneous. The framework essentially consists of the GDPR, Regulation (EU) 2018/1725 (the ‘EU DPR’) governing the processing of personal data by Union institutions, bodies, offices, and agencies, and the so-called Law Enforcement Directive (EU) 2016/680 (the ‘LED’) governing the processing of personal data by national law enforcement authorities.Footnote 116 While it is the EU DPR that governs the use of AI by the EU actors examined in this chapter, we also see that the final legal responsibility in the EU’s integrated border control rests with the national border authorities or law enforcement authorities.Footnote 117 The data processing activities of the EU agencies, such as Frontex, are furthermore governed by the agencies’ own founding regulations. These specialised legal instruments thus embody both the exceptions to the EU DPR and the lex specialis rules of the LED.Footnote 118 Accordingly, while the EU DPR, in conjunction with the Frontex and ETIAS Regulations, will apply to the AI-powered ETIAS system and the development of its algorithm by eu-LISA and Frontex in their distinct capacities, the LED and/or the GDPR will govern the ETIAS searches and the reliance on the generated output by the national border and law enforcement authorities.

Second, this fragmentation is problematic for access to remedies against potential violations of fundamental rights in AI-driven conduct of the EU actors, considering the remedial system under the EU data protection framework. The latter is essentially a twofold system. Individuals may lodge a complaint with an independent supervisory authority of the Member State or with the EDPS.Footnote 119 Furthermore, affected individuals enjoy the right to an effective judicial remedy against a decision of that supervisory authority or the EDPS, or against a decision of the controller or processor.Footnote 120 The GDPR also provides for the possibility of representative action by civil society organisations on behalf of the data subjects.Footnote 121 Research, however, shows that direct remedies are often not utilised, especially in the security context, where the collection of personal data within the alerts entered in the EU information systems is rarely known to the data subjects.Footnote 122

Instead, a person who is refused travel authorisation will in most cases be able to appeal only the final refusal decision before the supervisory authority of the refusing Member State. In this context, they will, for instance, be able to file a complaint against the Member State authority for non-compliance with the obligation to manually review the automated hit, pursuant to the requirements of Article 22 GDPR or Article 11 LED. The Member State authority will, however, lack jurisdiction to review the development and deployment of the ETIAS risk algorithm. Accordingly, the affected person will have to lodge a separate complaint with the EDPS, which is competent to review the acts of the ETIAS Central Unit based in the Frontex agency.

Lastly, in their complaint to the EDPS, the affected person will only be able to invoke their rights as a data subject.Footnote 123 The list of data subjects’ rights develops the substance of the autonomous fundamental right to personal data protection (Article 8 CFR). Via the remedial avenues under EU data protection law, individuals might, however, be able to bring claims concerning potential violations of other fundamental rights, including, for instance, non-discrimination or the right to an effective remedy. In other words, where the EU data protection rules provide specific safeguards regarding non-discrimination, such as in the context of processing special categories of data,Footnote 124 they integrate many of the Charter rights relevant to the digital context.Footnote 125 Overall, integrating the Charter’s rights into the secondary data subject rights could, however, lead to inferior legal protection. This is because data protection law guarantees data subjects’ rights subject to a substantial number of exceptions and limitations, as is evident from the long list of exceptions to the general prohibition on processing special categories of personal data in Article 9(2) GDPR. Such a priori exceptions might not be subject to the same proportionality and necessity test as permissible limits to fundamental rights are under Article 52(1) CFR.Footnote 126 Although the Court of Justice does apply a strict review of proportionality and necessity in similar high-risk AI uses, as demonstrated by its very strict scrutiny of the safeguards under the PNR scheme, the same may not be the case for complaints addressed by national supervisory authorities. A strict proportionality review is especially necessary given that the affected persons might often not have sufficient possibilities to bring claims of violations of the Charter’s rights before the courts, since, as explained above, enforcement of data-specific rights primarily rests with independent data protection authorities (DPAs). The DPAs’ remedial competence, however, differs substantially from judicial competence. Yet, given their primary role in the digital age, these authorities increasingly perform quasi-judicial review of claims implicating Charter rights beyond the requirements guaranteed under the EU data protection framework.

The last and key concern arising from the interplay between the AI Act and existing data protection remedies relates precisely to the designation of, and cooperation among, the variety of supervisory authorities with competences over different parts of AI-driven conduct that may lead to potential fundamental rights violations. As a product safety regulation, the Commission’s original AI Act Proposal did not include any rights and remedies for the affected persons in relation to the uses of AI systems. Critics found this lacuna highly problematic,Footnote 127 given that the Act’s risk-based approach was envisioned to ensure full respect for fundamental rights and freedoms. Without a right to complain against the AI risks, individuals may be able to subsume their claims under their rights as data subjects. This would, however, prove to be only a partial remedy against the diverse and serious risks posed by the use of AI systems, as demonstrated in this chapter. It is therefore essential that individuals have meaningful access to redress mechanisms. The European Parliament proposed to fill this vacuum with the introduction of Chapter 3a to the AI Act Proposal.Footnote 128 The effort culminated in the addition of Section 4 in the final version of the Act, which, however, provides only a limited consolidation of the calls for enhancing access to justice against the risks of AI. Namely, the remedies under the AI Act are essentially twofold: (a) a product-related complaint mechanism before the designated market surveillance authorities; and (b) the right to an explanation of individual decision-making where the decision is made on the basis of a high-risk AI output.

Furthermore, the effective enforcement of remedies under the AI Act will be contingent on more substantive discrepancies between the two legal frameworks, which are, however, beyond the scope of this chapter.Footnote 129 For instance, such discrepancies surface with respect to the definitions. The original proposal lacked any recognition of the position of the affected private persons under the AI Act legal framework. Notably, the original notion of the ‘user’ within the AI Act Proposal has been changed to denote the ‘deployer’, meaning any natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.Footnote 130 The AI ‘users’, now called ‘deployers’, thus correspond to the data controllers or processors in the GDPR sense.Footnote 131 In another vein, discrepancies arise from the formulations of the scope of various corresponding rules within the two legal frameworks. For instance, pursuant to recital 63 of the AI Act Proposal, the classification of an AI system as high-risk, and hence the permission of its use, does not automatically mean that the use of that system is lawful under ‘other Union law’, including data protection rules. The Proposal further clarifies that ‘any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law’. Yet the AI Act provides its own legal basis, for instance, for the processing of special categories of personal data, which shall, however, not contradict the general prohibition under the data protection rules.Footnote 132 Assuming that these discrepancies will eventually be resolved, access to effective remedies against the AI-driven conduct of EU actors, such as Frontex, requires specific attention to the role and powers of the EDPS. In order to effectively safeguard the fundamental rights of individuals affected by the EU uses of AI, the EDPS will need to adapt its role and powers by carefully crafting the requirements under EU data protection rules in light of their potential interplay with the AI Act’s requirements.

15.4 Double-hatting the EDPS

The AI Act envisions a central role for the EDPS in overseeing AI uses by the EU actors, including Frontex. To appraise the potential of this role, this section explores how the AI Act, in conjunction with the EU DPR, construes the EDPS’ competence and whether it does so with sufficient clarity to contribute to mitigating the above-identified risks to fundamental rights.

As explained above, under the EU DPR, the EDPS is responsible for ensuring that any processing of personal data by EU institutions, bodies, offices, and agencies respects the fundamental rights and freedoms of natural persons (Article 52(2)). To that end, the Supervisor, among other tasks, receives and handles complaints from data subjects (Article 63 EU DPR). Through this redress mechanism, individuals are given the possibility to take control over their data and seek remedies for any breaches of their rights as data subjects.Footnote 133 However, the mere existence of the possibility to complain about potential breaches of data subjects’ rights under the EU DPR does not necessarily guarantee the EU actors’ compliance with fundamental rights more broadly. Indeed, to that end, pursuant to Article 64 EU DPR, individuals also enjoy the right to an effective judicial remedy before the Court of Justice of the EU, including through direct claims for damages and appeals against the decisions of the EDPS.Footnote 134

Under the AI Act, the EDPS’ role in remedying potential violations of fundamental rights is, however, less clear. The definition of ‘national competent authority’ under Article 3(3) specifies that, for AI development and uses by EU actors, ‘references to national competent authorities or market surveillance authorities in this Regulation shall be construed as references to the European Data Protection Supervisor’. For the latter, Article 74(9) further specifies that wherever the AI Act applies to the EU actors, the EDPS shall be the designated supervisory authority, ‘except in relation to the [CJEU] acting in its judicial capacity’. This, however, opens up the question: what are the powers and competences of the EDPS as a market surveillance authority in respect of the AI Act as a product safety regulation, and what does that mean for individuals’ access to effective remedies?

Under the AI Act, the EDPS will assume diverse tasks with respect to the enforcement of the AI Act’s obligations. There is at the moment, however, a clear gap in the procedural possibility to lodge complaints with the EDPS under the AI Act. In contrast to the envisioned right to lodge a complaint with the national market surveillance authority under Article 85 AI Act, the AI Act does not afford an equivalent right to lodge a complaint with the EDPS, akin to Article 63 EU DPR. Nor, therefore, does the AI Act grant a right to an effective judicial remedy against the decisions of the EDPS concerning the compliance of EU actors’ deployment of AI systems with the requirements of the AI Act, akin to Article 64 EU DPR (and in light of the requirements of Article 47 of the Charter). Without direct procedural access to remedies, individuals will thus have to rely on their rights as data subjects under the EU DPR in seeking protection against potential violations arising from the EU actors’ use of AI, despite the existence of clear obligations falling on the latter.

Instead, under the AI Act framework, the EDPS will assume a threefold role: (a) a ‘market surveillance authority’ (Article 74(9)), (b) an ‘observer’ within the new European AI Board (Article 65(2)), and (c) the designer of regulatory sandboxes for EU actors (Article 57(3)).

First, in its capacity as a market surveillance authority, the EDPS will undertake conformity assessments (a form of ex ante compliance mechanism) for the EU actors’ uses of AI and notify the outcomes of these assessments to the Commission.Footnote 135 While on the face of it a clear task, the AI Act also requires, under Article 27, that deployers of high-risk AI systems that are bodies governed by public law or private entities providing public services, as well as deployers of certain high-risk AI systems, such as banking or insurance entities, carry out a fundamental rights impact assessment prior to putting the system into use, without clearly encompassing this assessment within the mandate of the EDPS. As a whole, the conformity assessment procedure with respect to the EU actors’ development and deployment of AI is rather underspecified. This perhaps calls into question the EU legislators’ choice of a single regulatory instrument as opposed to, for instance, a separate, more targeted regulation governing the EU actors’ obligations, akin to the EU DPR.

Second, the EDPS will play a further role within the newly established European Artificial Intelligence Board (hereafter the AI Board).Footnote 136 The AI Board is to advise and assist the Commission and the Member States in order to facilitate the consistent and effective application of the AI Act (Article 66), including by facilitating coordination and harmonisation of the practices of national competent authorities; collecting and sharing technical and regulatory expertise and best practices; issuing recommendations and written opinions on any relevant matters related to the implementation of the AI Act; and performing other advisory and coordinating tasks aimed at improving the implementation of the AI Act as a whole.Footnote 137 Akin, perhaps, to the European Data Protection Board in its advisory capacity,Footnote 138 this role will not by itself constitute a remedial avenue for individuals to seek an effective review of EU actors’ uses of AI systems, as the Board will not possess any direct enforcement powers.Footnote 139

Lastly, the EDPS will also participate in the organisation of regulatory sandboxes for the development, testing, and validation of innovative AI systems at the Union level, before they are deployed. The policy option of regulatory sandboxes has emerged as an experimental regulatory method aimed at addressing the uncertainty of the AI industry and its associated knowledge gaps, with the intention of enabling smaller companies to prepare for the necessary conformity assessments.Footnote 140 Pursuant to Article 57, the sandboxes shall provide a ‘controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time before their being placed on the market or put into service pursuant to a specific sandbox plan agreed between the providers or prospective providers and the competent authority’. Research in the field of AI and ethics has compared the reliance on regulatory sandboxes to ‘nurturing moral imaginations’.Footnote 141 At the same time, the AI regulatory sandboxes ‘shall not affect the supervisory and corrective powers of the competent authorities’, and ‘any significant risks to fundamental rights, democracy and rule of law, health and safety or the environment identified during the development and testing of such AI systems shall result in immediate and adequate mitigation’ (Article 57(11)). The EDPS will be tasked with organising such sandboxes at the EU level (Article 57(3)). In this context, the EDPS shall provide guidance and supervision within the sandbox with respect to identifying risks, in particular to fundamental rights, and to demonstrating mitigation measures and their effectiveness in addressing the identified risks. A relative novelty in EU law, regulatory sandboxes emerge as a form of ‘experimental legal regime’ which, according to Ranchordás, can ‘waive [or] modify national regulatory requirements (or implementation)’ as a way of offering ‘safe testbeds for innovative products and services without putting the whole system at risk’.Footnote 142 Given their novelty, there is still little empirical knowledge about their potential usefulness in improving fundamental rights protection.

In view of the many difficulties in lodging complaints in the digital context,Footnote 143 the fundamental rights–protecting role of the EDPS is much wider under the EU DPR rules.Footnote 144 Beyond ensuring the EU actors’ compliance with data subjects’ rights, the EDPS’ role entails promoting public awareness, conducting investigations, advising the EU institutions, adopting soft-law guidelines clarifying data protection requirements, authorising contractual clauses, and more. In this respect, the supervisory role of the EDPS is likely to continue in its existing fashion with respect to the EU’s uses of AI applications, with direct reference to the new AI-specific requirements enumerated under the AI Act in tandem with the data protection requirements. The current EDPS has, for instance, already taken a firm stance on the AI-driven data processing activities of EU agencies, including Frontex and Europol.Footnote 145

Navigating the landscape of exceptions and derogations with respect to data uses, especially in the law enforcement context, will, however, continue to undermine the EU’s efforts to ensure a human-centred use and deployment of AI with full respect for fundamental rights. In light of the ongoing technological empowerment of EU agencies, as exemplified by the expanding role of Frontex,Footnote 146 more structural adjustments of the EDPS’ powers and tasks vis-à-vis AI-powered EU conduct may be necessary for the effective enforcement of rights under the fragmented legal frameworks, rather than merely the introduction of more rights and obligations.Footnote 147 For now, direct protection of fundamental rights against the uses of AI by EU actors will remain primarily within the power of the EDPS under the remedial avenues stemming from the EU DPR. Accordingly, the way the Supervisor applies the new AI-specific rules in conjunction with individuals’ rights as data subjects will be crucial to furthering the protection of fundamental rights in the AI-driven conduct of EU actors, such as Frontex.

15.5 Conclusion

Using the case of EU agencies such as Frontex, which have spearheaded the development and deployment of AI for border surveillance purposes, the chapter assessed the risks this conduct poses to fundamental rights and the affected persons’ possibilities to remedy likely violations. By examining two examples of AI uses by Frontex – automated risk assessments under the new ETIAS system and AI-powered aerial surveillance for border response – the chapter demonstrated diverse risks to fundamental rights, including privacy, personal data protection, non-discrimination, and the right to asylum.

In light of these concerns, the chapter highlighted the challenges in accessing remedies against AI uses by EU actors, both with respect to the procedural rights to good administration and an effective judicial remedy and within the remedial set-up under the emerging framework for regulating AI – the AI Act. Examining the limits of the AI Act in defining a concrete role for the European Data Protection Supervisor (EDPS), the chapter called for further restructuring of the EDPS’ powers with respect to fundamental rights protection in view of its combined mandate under the EU’s data protection and AI frameworks. With the identified gaps still in place, including the lack of a direct remedy against the EU actors’ use of AI under the AI Act, the EDPS will play a central role in guaranteeing the legal protection of fundamental rights against the emerging AI-powered conduct. To undertake this role effectively, the gaps identified in this chapter will need to be carefully addressed.

Footnotes

1 Charter of Fundamental Rights of the European Union [2016] OJ C202/389 (CFR).

2 Commission, ‘Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’ COM (2021) 206 final, (AI Act Proposal). This chapter was finalised in September 2023, and revised in May 2024. Therefore, this contribution takes into account the latest available draft of 16.4.2024 – the Corrigendum to the position of the European Parliament adopted at first reading on 13 March 2024 with a view to the adoption of Regulation (EU) 2024/[…] of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) P9_TA(2024)0138 (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)).

3 AI Act, art 3(1).

4 Regulation (EU) 2018/1726 of the European Parliament and of the Council of 14 November 2018 on the European Union Agency for the Operational Management of Large-Scale IT Systems in the Area of Freedom, Security and Justice (eu-LISA), and amending Regulation (EC) No 1987/2006 and Council Decision 2007/533/JHA and repealing Regulation (EU) No 1077/2011 [2018] OJ L295/99.

5 Chris Jones, Ana Valdivia, and Jane Kilpatrick, ‘Funds for Fortress Europe: Spending by Frontex and Eu-LISA’ (Statewatch, 28 January 2022) <www.statewatch.org/analyses/2022/funds-for-fortress-europe-spending-by-frontex-and-eu-lisa/>. As the authors report, the total amount spent by eu-LISA on contracts with the private sector between 2014 and 2020 alone was €1.5 billion.

6 Footnote Ibid. Another contract worth €140 million was agreed with a consortium made up of Atos, IBM, and Leonardo (formerly Finmeccanica) for the additional work on the BMS.

7 Regulation (EU) 2019/1896 of the European Parliament and of the Council of 13 November 2019 on the European Border and Coast Guard and repealing Regulations (EU) No 1052/2013 and (EU) 2016/1624 [2019] OJ L295/1.

8 Jones, Valdivia, and Kilpatrick (Footnote n 5).

9 Frontex, ‘Artificial Intelligence-Based Capabilities for the European Border and Coast Guard: Final Report’ (European Border and Coast Guard Agency 2021) <https://frontex.europa.eu/publications/artificial-intelligence-based-capabilities-for-the-european-border-and-coast-guard-final-report-CYyjoe> s 2.2.

10 Footnote Ibid annex C.

11 Giovanni De Gregorio and Sofia Ranchordás, ‘Breaking down Information Silos with Big Data: A Legal Analysis of Data Sharing’ in Joe Cannataci, Valeria Falce, and Oresto Pollicino (eds), Legal Challenges of Big Data (Edward Elgar 2020).

12 Simona Demková, Automated Decision-Making and Effective Remedies: The New Dynamics in the Protection of EU Fundamental Rights in the Area of Freedom, Security and Justice (Edward Elgar 2023) ch 2.

13 Niovi Vavoula, Immigration and Privacy in the Law of the European Union: The Case of Information Systems (Brill Nijhoff 2022) <https://brill.com/view/title/35886>.

14 The PNR Scheme refers to the EU regime set up by Directive (EU) 2016/681 enabling national law enforcement authorities to process and automatically analyse potential security risks among the passengers on the EU’s external and/or internal flights. Directive (EU) 2016/681 of the European Parliament and of the Council of 27 April 2016 on the use of passenger name record (PNR) data for the prevention, detection, investigation and prosecution of terrorist offences and serious crime [2016] OJ L119/132; see also Julien Jeandesboz, ‘Ceci n’est Pas Un Contrôle: PNR Data Processing and the Reshaping of Borderless Travel in the Schengen Area’ (2021) 23 European Journal of Migration and Law 431.

15 Paul Quinn and Gianclaudio Malgieri, ‘The Difficulty of Defining Sensitive Data – The Concept of Sensitive Data in the EU Data Protection Framework’ (2021) 22 German Law Journal 1583.

16 Laurent Beslay and Javier Galbally, ‘Fingerprint Identification Technology for Its Implementation in the Schengen Information System II (SIS-II)’ [2015] JRC Science for Policy Report EUR 27473, <https://publications.jrc.ec.europa.eu/repository/handle/JRC97779> 100; Joint Research Centre, Study on Fingermark and Palmmark Identification Technologies for Their Implementation in the Schengen Information System (Publications Office of the EU 2019) <http://publications.europa.eu/publication/manifestation_identifier/PUB_KJNA29755ENN>; Niovi Vavoula, ‘Artificial Intelligence (AI) at Schengen Borders: Automated Processing, Algorithmic Profiling and Facial Recognition in the Era of Techno-Solutionism’ (2021) 23 European Journal of Migration and Law 457.

17 Regulation (EU) 2018/1240 of the European Parliament and of the Council of 12 September 2018 establishing a European Travel Information and Authorisation System (ETIAS) and amending Regulations (EU) No 1077/2011, (EU) No 515/2014, (EU) 2016/399, (EU) 2016/1624 and (EU) 2017/2226, OJ L236/1 (ETIAS Regulation).

18 ‘What Is ETIAS’ (Travel-europe.europa.eu) <https://travel-europe.europa.eu/etias/what-etias_en>.

19 ETIAS Regulation, art 4.

20 Footnote Ibid arts 5–7; See also ‘Eu-LISA – Core Activities’ (eulisa.europa.eu) <www.eulisa.europa.eu/Activities>.

21 ETIAS Regulation, art 20.

22 The latter include the ETIAS Central System, the Interpol Stolen and Lost Travel Document database (SLTD), and the Interpol Travel Documents Associated with Notices database (TDAWN).

23 ETIAS Regulation, art 33.

24 Footnote Ibid art 34.

25 Footnote Ibid art 33(2) and (3).

26 Paulina Jo Pesch, Diana Dimitrova, and Franziska Boehm, ‘Data Protection and Machine-Learning-Supported Decision-Making at the EU Border: ETIAS Profiling Under Scrutiny’ in Agnieszka Gryszczyńska and Others (eds), Privacy Technologies and Policy (Springer International 2022).

27 ETIAS Regulation, art 33(4).

28 GDPR, art 4(4) defines profiling as ‘any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements’. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L119/1 (GDPR).

29 GDPR, art 22; and its equivalent Regulation (EU) 2018/1725, art 24.

30 Case C-817/19 Ligue des droits humains v Conseil des ministres [2022] ECLI:EU:C:2022:491, paras 189 and 196.

31 ‘ETIAS’ (Frontex.europa.eu) <https://frontex.europa.eu/what-we-do/etias/>.

32 ETIAS Regulation, arts 32 and 37.

33 Footnote Ibid art 38(2).

34 Footnote Ibid art 32(3).

35 Footnote Ibid recital (25).

36 Originally Regulation (EU) No 1052/2013 of the European Parliament and of the Council of 22 October 2013 establishing the European Border Surveillance System (EUROSUR) [2013] OJ L295/11, which was repealed by Regulation (EU) 2019/1896 of the European Parliament and of the Council of 13 November 2019 on the European Border and Coast Guard and repealing Regulations (EU) No 1052/2013 and (EU) 2016/1624 [2019] OJ L295/1 (EBCG Regulation), section 3. The use of EUROSUR is governed under the Commission Implementing Regulation (EU) 2021/581 of 9 April 2021 on the situational pictures of the European Border Surveillance System (EUROSUR), C/2021/2361 [2021] OJ L124/3 (EUROSUR Implementing Regulation).

37 EUROSUR Implementing Regulation, arts 1 and 2.

38 EBCG Regulation, art 28.

39 Footnote Ibid art 69.

40 Raluca Csernatoni, ‘Constructing the EU’s High-Tech Borders: FRONTEX and Dual-Use Drones for Border Management’ (2018) 27 European Security 175.

42 European Union Agency for Fundamental Rights, ‘How the Eurosur Regulation Affects Fundamental Rights’ <https://fra.europa.eu/sites/default/files/fra_uploads/fra-2018-eurosur-regulation-fundamental-rights-impact_en.pdf> 4.

43 Simona Demková, ‘The Decisional Value of Information in European Semi-Automated Decision-Making’ (2021) 14 Review of European Administrative Law 29.

44 AI Act, Preamble (1) and art 1(1).

45 Consolidated Version of the Treaty on European Union [2016] OJ C202/13 (TEU).

46 Haroon Sheikh, Corien Prins, and Erik Schrijvers, Mission AI: The New System Technology (Springer International 2023) <https://link.springer.com/10.1007/978-3-031-21448-6>.

47 Article 1 of the Charter. See, Paola Inverardi, ‘The Challenge of Human Dignity in the Era of Autonomous Systems’ in Hannes Werthner and Others (eds), Perspectives on Digital Humanism (Springer International 2022) <https://doi.org/10.1007/978-3-030-86144-5_4>; Sean Kanuck, ‘Humor, Ethics, and Dignity: Being Human in the Age of Artificial Intelligence’ (2019) 33 Ethics & International Affairs 3.

48 High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI, 2019, 39; Closely mirroring the OECD, Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449, 2022, 7; and Organisation of Economic Cooperation and Development, ‘OECD Principles on Artificial Intelligence’ <https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449> accessed 29 November 2021.

49 Case C-4/73 Nold [1974] ECLI:EU:C:1974:51, para 14.

50 Takis Tridimas and Giulia Gentile, ‘The Essence of Rights: An Unreliable Boundary?’ (2019) 20 German Law Journal 794.

51 Plixavra Vogiatzoglou and Peggy Valcke, ‘Two Decades of Article 8 CFR: A Critical Exploration of the Fundamental Right to Personal Data Protection in EU Law’ [2022] Research Handbook on EU Data Protection Law 11; see also Opinion of AG Sharpston in Joined Cases C-92/09 and C-93/09 Volker und Markus Schecke GbR v Land Hessen [2010] ECLI:EU:C:2010:353, para 71.

52 European Union Agency for Fundamental Rights, ‘Getting the Future Right – Artificial Intelligence and Fundamental Rights’ (14 December 2020) <https://fra.europa.eu/en/publication/2020/artificial-intelligence-and-fundamental-rights> 61; Orla Lynskey, ‘Deconstructing Data Protection: The “Added Value” Of A Right To Data Protection In The EU Legal Order’ (2014) 63 International & Comparative Law Quarterly 569.

53 The secondary rules of EU data protection law that give effect to the right enshrined in Article 8 CFR expand the rights of data subjects beyond those explicitly listed therein (Chapter 3 of the GDPR). See CFR (Footnote n 1) art 8.

54 Evelien Brouwer, ‘Legality and Data Protection Law: The Forgotten Purpose of Purpose Limitation’ in Leonard FM Besselink, F Pennings and Sacha Prechal (eds), The eclipse of the legality principle in the European Union (Kluwer Law International 2011).

55 Interoperability means ‘the ability of information systems to exchange data and to enable the sharing of information. It is about a targeted and intelligent way of using existing data to best effect, without creating new databases or changing the access rights to the existing information systems’. European Commission, ‘Security Union: Closing the Information Gap’ <https://home-affairs.ec.europa.eu/system/files_en?file=2019-04/20190416_agenda-security-factsheet-closing-information-gaps_en.pdf>.

56 Regulation (EU) 2019/817 of the European Parliament and of the Council of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of borders and visa [2019] OJ L135/27; and Regulation (EU) 2019/818 of the European Parliament and of the Council of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of police and judicial cooperation, asylum and migration [2019] OJ L135/85 (the Interoperability Regulations). The systems are foreseen to become interoperable in 2024.

57 De Gregorio and Ranchordás (Footnote n 11).

58 Francesca Galli, ‘Interoperable Databases: New Cooperation Dynamics in the EU AFSJ?’ (2020) 26 European Public Law 109; Statewatch, ‘Frontex and Interoperable Databases: Knowledge as Power?’ (Statewatch 2023) <www.statewatch.org/frontex-and-interoperable-databases-knowledge-as-power/>.

59 ETIAS Regulation art 22.

60 Article 29 Data Protection Working Party, ‘Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679, Adopted on 3 October 2017, as Last Revised and Adopted on 6 February 2018, 17/EN WP251rev.01’ <https://ec.europa.eu/newsroom/article29/items/612053>.

61 ETIAS Regulation art 21(1).

62 Footnote Ibid art 22.

63 Luisa Marin and Kamila Krajčíková, ‘Deploying Drones in Policing Southern European Borders: Constraints and Challenges for Data Protection and Human Rights’ in Aleš Završnik (ed), Drones and Unmanned Aerial Systems: Legal and Social Implications for Security and Surveillance (Springer International Publishing 2016) <https://doi.org/10.1007/978-3-319-23760-2_6>; Csernatoni (Footnote n 40).

64 Gianclaudio Malgieri, ‘Automated Decision-Making and Data Protection in Europe’ (2022) Research Handbook on Privacy and Data Protection Law 433.

65 ETIAS Regulation, art 17(2).

66 Malgieri (Footnote n 64).

67 Melissa Heikkila, ‘Dutch Scandal Serves as a Warning for Europe over Risks of Using Algorithms’ (politico.eu, 29 March 2022) <www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/>; ‘Boete Belastingdienst voor zwarte lijst FSV’ (Autoriteit Persoonsgegevens, 12 April 2022) <https://autoriteitpersoonsgegevens.nl/actueel/boete-belastingdienst-voor-zwarte-lijst-fsv>.

68 GDPR, art 22(4) in conjunction with art 9.

69 Demková, Automated Decision-Making and Effective Remedies (Footnote n 12) 34–36.

70 Case C-817/19 Ligue des droits humains v Conseil des ministres [2022] ECLI:EU:C:2022:491.

71 Footnote Ibid para 204.

72 Footnote Ibid para 203.

73 Footnote Ibid paras 205–208.

74 European Union Agency for Fundamental Rights, ‘Getting the Future Right’ (Footnote n 52) 69.

75 Ligue des droits humains (Footnote n 70) para 210.

76 ‘MEPs to Grill Frontex Director on Agency’s Role in Pushbacks of Asylum-Seekers’ (European Parliament, 30 November 2020) <www.europarl.europa.eu/news/en/press-room/20201126IPR92509/meps-to-grill-frontex-director-on-agency-s-role-in-pushbacks-of-asylum-seekers>; Judith Sunderland and Lorenzo Pezzani, ‘Airborne Complicity: Frontex Aerial Surveillance Enables Abuse’ (Human Rights Watch, 12 August 2022) <www.hrw.org/node/383557>.

77 Abbas Azimi and Others, ‘The Crotone Cover Up’ (Lighthouse Reports, 2 June 2023) <www.lighthousereports.com/investigation/the-crotone-cover-up/>.

78 Frontex, ‘Technical and Operational Strategy for European Integrated Border Management’ <http://op.europa.eu/en/publication-detail/-/publication/2123579d-f151-11e9-a32c-01aa75ed71a1>; see also the collections in Miroslava Scholten and Michiel Luchtman (eds), Law Enforcement by EU Authorities Implications for Political and Judicial Accountability (Edward Elgar 2017).

79 The 123-page report was published in full by the German freedom of information platform Frag Den Staat, together with Lighthouse Reports and Der Spiegel, to whom the report was first leaked, available at <https://cdn.prod.www.spiegel.de/media/00847a5e-8604-45dc-a0fe-37d920056673/Directorate_A_redacted-2.pdf>.

80 ‘Frontex Failing to Protect People at EU Borders’ (Human Rights Watch, 23 June 2021) <www.hrw.org/news/2021/06/23/frontex-failing-protect-people-eu-borders>; ‘EU: Frontex Complicit in Abuse in Libya’ (Human Rights Watch, 12 December 2022) <www.hrw.org/news/2022/12/12/eu-frontex-complicit-abuse-libya>.

81 See also Sir Elihu Lauterpacht and Daniel Bethlehem, ‘The Scope and Content of the Principle of Non-Refoulement: Opinion’ in Erika Feller, Volker Türk, and Frances Nicholson (eds), Refugee Protection in International Law (Cambridge University Press 2003); Rebecca M M Wallace, ‘The Principle of Non-Refoulement in International Refugee Law’ in Vincent Chetail and Céline Bauloz (eds), Research Handbook on International Law and Migration (Edward Elgar 2014).

82 European Union Agency for Fundamental Rights, ‘How the Eurosur Regulation Affects Fundamental Rights’ (Footnote n 42).

83 Case C-72/15 Rosneft [2017] ECLI:EU:C:2017:236, para 73; Case C-216/18 PPU Minister for Justice and Equality v LM [2018] ECLI:EU:C:2018:586, para 51.

84 See Case C-166/13 Mukarubega v Seine-Saint-Denis [2014] ECLI:EU:C:2014:2336, paras 43–49; Case C-521/15 Spain v Council [2017] ECLI:EU:C:2017:982, para 89; Case C-604/12 N [2014] ECLI:EU:C:2014:302, para 49; or the more recent Joined Cases C-225/19 and C-226/19 R.N.N.S., K.A. v Minister van Buitenlandse Zaken [2020] ECLI:EU:C:2020:951, para 34.

85 Herwig C H Hofmann and Bucura Catalina Mihaescu-Evans, ‘The Relation between the Charter’s Fundamental Rights and the Unwritten General Principles of EU Law: Good Administration as the Test Case’ (2013) 9 European Constitutional Law Review 73.

86 Case C-269/90 TUM [1991] ECLI:EU:C:1991:438, para 14.

87 Case C-16/90 Nölle v Hauptzollamt Bremen-Freihafen [1991] ECLI:EU:C:1991:402, para 29.

88 Herwig C H Hofmann, ‘The Duty of Care in EU Public Law – A Principle between Discretion and Proportionality’ (2020) 13 Review of European Administrative Law 87; Demková, Automated Decision-Making and Effective Remedies (Footnote n 12) ch 6.

89 Simona Demková and Herwig C H Hofmann, ‘General Principles of Procedural Justice’ in Katja Ziegler, Päivi Neuvonen, and Violeta Moreno-Lax (eds), Research Handbook on General Principles of EU Law: Constructing Legal Orders in Europe (Edward Elgar 2022).

90 Demková, Automated Decision-Making and Effective Remedies (Footnote n 12) 175–178.

91 Saar Alon-Barkat and Madalina Busuioc, ‘Human–AI Interactions in Public Sector Decision Making: “Automation Bias” and “Selective Adherence” to Algorithmic Advice’ (2023) 33 Journal of Public Administration Research and Theory 153.

92 Demková and Hofmann (Footnote n 89).

93 R.N.N.S., K.A. v Minister van Buitenlandse Zaken (Footnote n 84) para 43.

94 Gloria González Fuster and Others, ‘The Right to Lodge a Data Protection Complaint: OK, but Then What? An Empirical Study of Current Practices under the GDPR’ (Data Protection Law Scholars Network and Access Now, 2022) <www.accessnow.org/cms/assets/uploads/2022/06/Complaint-study-Final-version-before-design-June-15.pdf>.

95 Simona Demková, ‘Enforcing Remedies: The Challenges of Automatisation for Effective Oversight’ in Katalin Ligeti and Kei Hannah Brodersen (eds), Studies on enforcement in multi-regulatory systems (Nomos 2022). See also Eliantonio in this volume, Chapter 13.

96 Case T-600/21 WS and Others v Frontex [2023] ECLI:EU:T:2023:492 and the comment by Melanie Fink and Jorrit Rijpma, ‘The EU General Court’s Judgment in the Case of WS and Others v Frontex: Human Rights Violations at EU External Borders Going Unpunished’ (EU Law Analysis, 22 September 2023) <https://eulawanalysis.blogspot.com/2023/09/the-eu-general-courts-judgment-in-case.html> accessed 2 October 2023. For discussion on access to damages, see Fink, Rauchegger, and De Coninck in this volume, Chapter 2.

97 ETIAS Regulation, arts 25 and 26.

98 Footnote Ibid art 29.

99 Footnote Ibid art 29(4).

100 Napoleon Xanthoulis, ‘Administrative Factual Conduct: Legal Effects and Judicial Control in EU Law’ (2019) 12 Review of European Administrative Law 39.

101 Demková, ‘The Decisional Value of Information in European Semi-Automated Decision-Making’ (Footnote n 43) 48.

103 Commission, ‘Artificial Intelligence for Europe’ COM (2018) 237 final.

104 AI Act (Footnote n 2).

105 AI HLEG, Ethics Guidelines for Trustworthy AI, 2019, available at <https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai> 39.

106 Consolidated Version of the Treaty on the Functioning of the European Union [2016] OJ C202/47 (TFEU).

107 Generally, the list used is defined in Article 2(2) of Council Framework Decision 2002/584/JHA or in reference to crimes that are punishable by a custodial sentence or a detention order for a maximum period of at least three years. The classification of high-risk AI uses is provided in Annex III. Council Framework Decision of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States [2002] OJ L190/1 (Council Framework Decision 190/1).

108 AI Act art 6(3) states: ‘[...] an AI system referred to in Annex III shall not be considered to be high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making.’

109 AI Act art 3(2). Luca Bertuzzi, ‘AI Act: EU Parliament’s Crunch Time on High-Risk Categorisation, Prohibited Practices’ (www.euractiv.com, 7 February 2023) <www.euractiv.com/section/artificial-intelligence/news/ai-act-eu-parliaments-crunch-time-on-high-risk-categorisation-prohibited-practices/>; see also Council Framework Decision 190/1, preamble (14).

110 AI Act, art 2.

111 AI Act arts 2(3) and (4).

112 AI Act Proposal, art 83(1).

113 AI Act, art 111(1).

114 European Data Protection Supervisor, ‘EDPS – EDPB Joint Opinion on the Proposal for a Regulation of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)’ (EDPS.eu, 2021) <https://edps.europa.eu/node/7140_en>.

115 Lilian Edwards, ‘Expert Opinion: Regulating AI in Europe: Four Problems and Four Solutions’ (Ada Lovelace Institute 2022) <www.adalovelaceinstitute.org/report/regulating-ai-in-europe/>. As Lilian Edwards points out, the Proposal takes an ‘essentially individualistic approach to fundamental rights’ lacking instead a more ‘systematic concern for groups’ rights and interests’. The latter, according to Edwards, may differ from the traditional categories of groups under anti-discrimination law. Indeed, algorithmic processing of personal data may lead to new algorithmically constituted categories of groups that deserve greater attention. But see AI Act Proposal, art 10(3) on training data in datasets.

116 Other legal instruments governing personal data processing that compose the EU data protection framework include, for instance, Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) [2002] OJ L201/37 (e-Privacy Directive).

117 Diana Dimitrova, ‘Data Protection within Police and Judicial Cooperation’ in Herwig C H Hofmann, Gerard C Rowe, and Alexander H Türk (eds), Specialized Administrative Law of the European Union: A Sectoral Review (Oxford University Press 2018). As the author reminds us, Article 87(2)(a) TFEU, laying down the legal basis for police cooperation between the EU Member States, mandates the EU to establish rules on ‘relevant information’ processing.

118 Teresa Quintel, Data Protection, Migration and Border Control: The GDPR, the Law Enforcement Directive and Beyond (Bloomsbury 2022) <www.bloomsbury.com/uk/data-protection-migration-and-border-control-9781509959648/>.

119 GDPR, art 77; EU DPR, art 63; Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA [2016] OJ L119/89 (LED) art 52.

120 GDPR, arts 78–79; EU DPR, art 64; LED, arts 53–54.

121 GDPR, art 80; EU DPR, art 67; LED, art 55.

122 Sergio Carrera and Marco Stefan, ‘Complaint Mechanisms in Border Management and Expulsion Operations in Europe: Effective Remedies For Victims of Human Rights Violations?’ (Centre for European Policy Studies 2018) <www.ceps.eu/ceps-publications/complaint-mechanisms-border-management-and-expulsion-operations-europe-effective/>; Fuster and Others (Footnote n 94); Demková, Automated Decision-Making and Effective Remedies (Footnote n 12).

123 GDPR, arts 15–22; EU DPR, arts 14–24; LED, arts 12–18.

124 See, for instance, GDPR, recitals (75) and (85) and arts 9 and 22(4).

125 As the Court reminds us, an act under EU law should ‘be interpreted, as far as possible, in such a way as not to affect its validity and conformity with primary law as a whole, and in particular, with the provisions of the Charter’. See Ligue des droits humains (Footnote n 70) para 86 with reference to Case C-481/19 Consob [2021] ECLI:EU:C:2021:84, para 50 and the case law cited.

126 Any such limits must be established in law, respect the essence of the right, and be necessary and proportionate to the objectives sought.

127 Edwards (Footnote n 115).

128 For an in-depth analysis of the remedial set-up under the AI Act from a comparative perspective, see Giovanni De Gregorio and Simona Demková, ‘The Constitutional Right to an Effective Remedy in the Digital Age: A Perspective from Europe’ in Ch van Oirsouw, J de Poorter, I Leijten, G van der Schyff, M Stremler, and M de Visser (eds), European Yearbook of Constitutional Law (forthcoming 2024) <https://ssrn.com/abstract=4712096> or <http://dx.doi.org/10.2139/ssrn.4712096>.

129 Drawing on links and discrepancies between the GDPR and the AI Act identified in the study by Artur Bogucki and Others, ‘The AI Act and Emerging EU Digital Acquis: Overlaps, Gaps and Inconsistencies’ (Centre for European Policy Studies, 2022) CEPS In-Depth Analysis s 2.1.1. <www.ceps.eu/ceps-publications/the-ai-act-and-emerging-eu-digital-acquis/>.

130 AI Act, art 3(4).

131 Bogucki and Others (Footnote n 129) 8.

132 AI Act, art 10(5).

133 Pieter T J Wolters, ‘The Control by and Rights of the Data Subject Under the GDPR’ (2018) 22(1) Journal of Internet Law 6.

134 EU DPR, art 64(3).

135 AI Act, arts 43 and 70, as per the obligations falling upon national market surveillance authorities.

136 Ibid art 65.

137 Footnote Ibid art 66.

138 Doubts as to this role have also been expressed by the EDPB-EDPS, ‘Joint Opinion 5/2021 on the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)’ (Edpb.eu, 18 June 2021) <https://edpb.europa.eu/our-work-tools/our-documents/edpbedps-joint-opinion/edpb-edps-joint-opinion-52021-proposal_en>.

139 AI Act art 66.

140 Jon Truby and Others, ‘A Sandbox Approach to Regulating High-Risk Artificial Intelligence Applications’ (2022) 13 European Journal of Risk Regulation 270.

141 Kristin Undheim, Truls Erikson, and Bram Timmermans, ‘True Uncertainty and Ethical AI: Regulatory Sandboxes as a Policy Tool for Moral Imagination’ (AI and Ethics, 24 November 2022) <https://doi.org/10.1007/s43681-022-00240-x>.

142 Sofia Ranchordás, ‘Experimental Lawmaking in the EU: Regulatory Sandboxes’ (Social Science Research Network, 22 October 2021) SSRN Scholarly Paper ID 3963810, 2 <https://papers.ssrn.com/abstract=3963810>.

143 Fuster and Others (Footnote n 94).

144 EU DPR, ch VI.

145 European Data Protection Supervisor, ‘EDPS Takes Legal Action as New Europol Regulation Puts Rule of Law and EDPS Independence under Threat’ (Edps.eu, 22 September 2022) <https://edps.europa.eu/press-publications/press-news/press-releases/2022/edps-takes-legal-action-new-europol-regulation-puts-rule-law-and-edps-independence-under-threat>. See also the EDPS Supervisory Opinion on the Rules on Processing of Operational Personal Data by the European Border and Coast Guard Agency (Frontex), (Case 2022-0147) (Edps.eu, 7 June 2022) <https://edps.europa.eu/data-protection/our-work/publications/supervisory-opinions/edps-supervisory-opinion-rules_en>.

146 Anneliese Baldaccini, ‘Counter-Terrorism and the EU Strategy for Border Security: Framing Suspects with Biometric Documents and Databases’ (2008) 10 European Journal of Migration and Law 31; Sergio Carrera and Valsamis Mitsilegas, ‘Constitutionalising the Security Union’ in Sergio Carrera and Valsamis Mitsilegas (eds), Constitutionalising the Security Union: Effectiveness, rule of law and rights in countering terrorism and crime (Centre for European Policy Studies 2017).

147 Herwig C H Hofmann, Gerard C Rowe, and Alexander H Türk (eds), Specialized Administrative Law of the European Union: A Sectoral Review (Oxford University Press 2018).

×