
Contesting border artificial intelligence: Applying the guidance-ethics approach as a responsible design lens

Published online by Cambridge University Press:  24 October 2022

Karolina La Fors*
Affiliation:
DesignLab, University of Twente, Enschede, The Netherlands
Fran Meissner
Affiliation:
ITC-PGM, University of Twente, Enschede, The Netherlands
*Corresponding author. E-mail: k.lafors@utwente.nl

Abstract

Border artificial intelligence (AI)—biometrics-based AI systems used in border control contexts—is proliferating as a common tool in border securitization projects. Such systems classify some migrants as posing risks like identity fraud, other forms of criminality, or terrorism. From a human rights perspective, using such risk framings for algorithmically facilitated evaluations of migrants' biometrics systematically calls into question whether these kinds of systems can be built to be trustworthy for migrants. This article provides a thought experiment; we use a bottom-up responsible design lens—the guidance-ethics approach—to evaluate whether responsible, trustworthy Border AI might constitute an oxymoron. The proposed European AI Act only limits the use of Border AI systems by classifying such systems as high risk. In parallel with these AI regulatory developments, large-scale civic movements have emerged throughout Europe to ban the use of facial recognition technologies in public spaces to defend EU citizens' privacy. That such systems remain acceptable for states to use in evaluating migrants, we argue, leaves migrants' lives insufficiently protected. In part, this is because regulations and ethical frameworks are top-down and technology driven, focusing more on the safety of AI systems than on the safety of migrants. We conclude that bordering technologies developed from a responsible design angle would entail the development of entirely different technologies: ones that refrain from harmful sorting based on biometric identifications and that start from the premise that migration is not a societal problem.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press

Policy Significance Statement

We assess the feasibility of developing trustworthy AI systems for border control contexts using a guidance-ethics approach (GEA). Problems with current policy point to the need for more human-centered regulation of AI. Some minimum requirements that AI systems need to meet are formulated. They need to (a) mediate representations of migrants accounting for migrants' needs, aspirations, and risks to their livelihoods, (b) not (risk) frame migrants through biometric features, (c) be migrant-centric instead of anchored in political and technological infrastructures, (d) recognize the diversity of positionalities people on the move take in border spaces, and (e) refrain from subjugating some more than others to measures of undue control. The GEA may lend itself to exploring alternative technologies to meet those requirements.

1. Introduction

Crossing international borders is frequently associated with long, tedious queues, handing one's passport to a person sitting in a raised booth, and being told to move along. At least this is the scenario if nothing about one's documents or person raises suspicion and triggers further screening. The inconvenience of the airport queue and narratives that bind together security "problems" and "solutions" (Oliveira Martins and Jumbert, 2020) have spurred on imaginaries of seamless borders—artificial intelligence (AI)-supported border spaces where movement is uninterrupted and yet controlled. Once sci-fi imaginaries, cameras and other technologies that compile data to recognize a traveler's identity are now a reality, with a large market for the requisite technologies (Biometric Update, 2021). Biometrics-based AI systems used in border control contexts, or as we refer to them here, Border AI, raise important questions about legitimacy, proportionality, and trustworthiness, and about whether current moves toward regulating AI will be able to address those concerns.

Biometric systems can read bodily features and behavioral traits and rely on those data to come to algorithmically mediated decisions about how likely it is that a person poses a risk (La Fors, 2016). In less technologically mediated border spaces, the color of one's passport or certain bodily, biological, or behavioral traits might render a traveler suspect. In contexts that use Border AI, those processes of control and exclusion become increasingly hidden in technology infrastructures. In this article, we present a thought experiment (Stuart, 2017). At its outset, we pose the question of whether Border AI could be trustworthy for migrants if it were designed responsibly. By deliberating this question and linking it to migration research, work on current developments in AI regulation, and technology design thinking, we show that even with value-sensitive design approaches, truly trustworthy Border AI is not feasible.

Migration and critical data studies scholars have noted that digitally rendered identities become constructed to measure risk—subjecting those on the move to risk-associated visibility (Taylor and Meissner, 2019). In constructing what constitutes risk, those systems automate ideas about migration and render (migrants') bodies machine readable (van der Ploeg and Sprenkels, 2011). This excludes some from seamlessly moving through border spaces—or from moving at all. Such exclusions, often based on iterative profiling via binary in-out classifications, result from a techno-centric framing of border control (Balayn and Gürses, 2021). Such a framing focuses on the safety of AI systems rather than on the safety of migrants. However, migrants are (uniquely) subjected to Border AI's surveillance practices, and for some, their livelihood depends on the decisions these systems shape.

We have recently witnessed growing civic protests against the use of facial recognition technologies by public authorities in public spaces (see Footnote 1). Border AI has not attracted similar attention. With this article, we thus highlight the infringements on civil liberties that biometric technologies imply within the specific context of the border. We do this by linking insights from migration studies with observations about responsible design and the emergence of regulatory frameworks that rely on tweaking technology to make the system safe but not necessarily safe for all affected by it. Calls for aligning thinking about the development of AI technologies with value-sensitive design approaches have become increasingly audible over the last few years (van de Poel, 2020). In the European debates, the call to adhere to basic European values such as protecting human rights frequently surfaces (González Fuster, 2021). Most available approaches to account for values offer top-down normative frameworks (Floridi and Cowls, 2019) and do not take bottom-up normative perspectives of diverse stakeholders, including migrants, into account—at least not beyond tokenistic involvement. Therefore, in this article, we opt for thinking through a transdisciplinary, responsible design approach—the guidance-ethics approach (GEA)—that could accommodate such perspectives when developing Border AI. We refrain from proposing alternative AI solutions for the border control context. What we offer through this focused thought experiment is a transdisciplinary enrichment of perspectives on Border AI.

We contextualize our discussion against the backdrop of the proposed European Artificial Intelligence Act (EAIA). The EAIA (European Commission, 2021b) was proposed to significantly restrict detrimental inferences of AI technologies that infringe on European values and fundamental rights. Floridi et al. (2018) note that AI has to be built for people. Placing people's safety first enhances trustworthiness because the system is safe to use and the user is also safe from the risks of the system; the reader will find more extensive discussions of trustworthy AI in other papers in this special issue. For this article, we limit ourselves to noting that various European efforts propose the use of co-design approaches to enhance the trustworthiness of AI (European Commission, 2019, 2021a). The principles of diversity, non-discrimination, and fairness in particular define co-creation with stakeholders as a prerequisite (European Commission, 2020). Non-discrimination interpreted with fairness in mind implies that Border AI accounts not just for the biological or behavioral traits of migrants but also for their needs, circumstances, and concerns.

An ethics-driven design angle raises the question of whether systems that endanger access to life-sustaining resources for some can ever be safe and trustworthy. The obvious follow-up question is: trustworthy and safe for whom? To engage with those questions, our thought experiment uses one specific responsible design lens, the GEA, to think about the design of Border AI. The GEA (Verbeek and Tijink, 2020) is a methodology for the ethical responsibilization of diverse stakeholders around societal problems and the role of technology in society—in our case, Border AI. The main objective of the GEA is to guide technology design by exposing value priorities and calling for morally sensitive development. As a starting point, the GEA focuses on the societal concerns for which technologies are introduced in order to assess the technology within its specific context of use, experience, and, as we highlight in this article, the context of regulation.

While it is easy to portray "the migrant" as a monolithic category, we recognize that the simple question "Who is a migrant?" links to "many political and scholarly debates around migration, racism and citizenship" (Scheel and Tazzioli, 2022). Ideas about migration are often motivated by a statist view of migration rather than one that recognizes the enactment of individuals as migrants through bordering practices that include the use of Border AI.

To set up our thought experiment, we explore what it means to apply a responsible design lens to Border AI. We first outline what a GEA involves. We then discuss what aspects are important when focusing on the overall border control context, including the specific context of use, experience, and regulation. Subsequently, we consider the hurdles this context poses for GEA, most prominently its aim to bring together multiple stakeholders. Through these discussions, we can point to inherent power asymmetries that Border AI amplifies at the expense of some migrants already in a more dependent and vulnerable societal position. In conclusion, we note that GEA should not be an exercise of AI ethics washing (Hao, 2019) and that the technologies we discuss will likely remain too harmful to be given the market they are projected to serve.

In light of proposed regulatory changes and approaches to AI technology, our outlook for the responsible design of Border AI is bleak. As Robert et al. (2020) highlight, whether to deploy AI systems has to remain a central question in thinking about fair, trustworthy, and ethical AI. Looking forward, we maintain that the GEA and similar approaches can be a useful exercise to account for diverse sets of actors in imagining what roles—other than biometric AI—technologies could have in the border context. As a secondary conclusion, we show that while a radical rethinking of border technologies is needed, we ought to remain aware that this context is entangled with prevalent and changing migration regimes, policies, and corporate interests. In addition, we note that "translating" (Latour, 1987) border control policies via data-hungry risk assessment technologies will not resolve the political tensions that shroud those policies.

2. Guidance-Ethics Approach

The GEA is a methodology that generates interactive dialogs and iterative design thinking with stakeholders representing government authorities, citizens, impacted non-citizens, industries, civil society, and academia. Those dialogs should help reflect on stakeholders' ethical values and responsibilities in developing new technologies. Such dialogs offer space to those impacted by the technology—in our case, migrants—to participate in the design of technologies they are subjected to. Thus, ideally, all those involved can influence a more responsible design, implementation, and use of a specific technology in a specific context. Civil society actors such as Privacy International (2021) have highlighted state actors' responsibility in designing, developing, and applying AI for migration. Basing our thought experiment on the GEA allows us to focus on a method that evaluates what values different actors bring to the technology design at its outset. This approach mitigates the risk of design being driven solely by technological lure or by commercial and political objectives.

Recognizing a multi-actor responsibility and fostering co-creative approaches within a specific technology application context, such as the border control context, pose a challenge to debates about responsible AI development that focus on how safe the system is. Notably, in this article we distinguish between the psychological and bodily safety of individuals and groups and the technical safety of a well-running system. Focusing on the latter type of safety is a central tenet of proposed regulations such as the EAIA. The challenge posed by including migrant voices is that the safety of those subject to AI systems needs to be considered; this aim to uphold fundamental European values should be the primary concern. Thinking about Border AI through a responsible design and guidance-ethics lens highlights that the safety of people has to be a key concern—not least to avoid migrants being unfairly labeled as misfits (La Fors, 2016). Some migrants are in jeopardy due to unsafe circumstances in their home countries, forcing them to accept sometimes highly vulnerable positions compared with host country citizens. In a balanced debate, their values and desires for a dignified life must be recognized. Such a starting point leads us to argue that the societal concern is not migration itself. Instead, the societal concern we want to highlight, and that a GEA needs to focus on, is how we assess and recognize the social impact of technologies that primarily affect marginalized populations by increasing their likelihood of being viewed as risks and excluded from mobility opportunities.

A GEA is conducted in three phases (see Figure 1): a contextual analysis phase, a dialog phase, and a third phase in which the identified values and responsibilities are considered in outlining design options. Incorporating a multi-stakeholder approach from start to finish, a GEA does not allow for discussions based solely on the deterrence of migration. With those considerations in mind, we next elaborate on the different phases of the GEA.

Figure 1. The three stages of the guidance-ethics approach (Verbeek and Tijink, 2020).

The first phase is the contextual analysis of what we call the border control context. By this, we mean the contexts within which Border AI is planned to be used—these can be traditional border settings such as airports or border crossings, but other settings also exist within which identities are assessed to decide the eligibility of individuals and families to enter and/or remain within a country of their choice. Guidance ethics acknowledges that Border AI systems cannot be detached from their contextual embedding, and it therefore requires commencing with contextual analysis. Reflecting upon and understanding Border AI technologies requires eliciting the value perceptions and responsibilities of diverse stakeholders. In the following context analysis, we focus on the border control context as a context of use and experience and as a context significantly shaped by relevant legislation that importantly links the various actors and stakeholders.

The thought experiment in this article is built around understanding a located border—such as an international airport or a physical border crossing. Border AI, however, also disperses border spaces, and our argumentation could easily be adapted to understanding spatially diffused bordering technologies (Pötzsch, 2015). Our critical evaluation of Border AI takes issue with framing such technologies as tools that can be perfected by being turned into "better" systems for the reliable and neutral securitization of the entry and exit of people into and out of different countries. Such optimization-focused thinking requires—and in the border control context legitimizes—the generation of more biometric data points. Those data are used to match migrants against risk scores. In practice, this too often leads to false positives. The goal—to improve certainty about a migrant's likelihood of posing a risk and to prevent or minimize false positives—keeps intact narratives about needing to capture, reuse, and analyze border movements with ever more (biometric) data.

Relevant stakeholders are identified following and building on the contextual analysis. These stakeholders are then brought into dialog in the second phase of the GEA. For border control contexts, relevant stakeholders include border guards, AI and biometric system developers, ideally government officials of migrants' home, transit, and host countries, migrants themselves, academia, and civil society. A GEA would stipulate that their presence and interactions would be beneficial for generating ethical engagement with Border AI. The identified stakeholders would be invited to exchange views about the implications of decision-making processes powered by Border AI within the border control context, including the implications for migrants' lives. The latter would include (a) taking into account how migrants' interactions with Border AI can, from their perspectives, become problematic and (b) mapping what share stakeholders and AI systems have in the emergent societal problem. The dialog phase is aimed at eliciting and analyzing the responsibilities of stakeholders, systematically excavating the potential impact of Border AI and the values at stake.

In the third stage of the approach, these identified values and responsibilities are translated into options for action. These options concern (a) Border AI itself (a potential redesign), (b) its environment (policy-making, legislation, complementary technologies), and (c) the user (education, raising awareness, communication, empowerment). Guidance ethics invites transdisciplinary ethical and socio-technical problem-solving throughout each phase, drawing on moral responsibilization techniques. In practice, those activities aim to identify the best viable option for action, which forms the third phase of the GEA. In the following sections, this article traces these three stages, focusing primarily on the context analysis, to ask if and how a GEA—in light of current and planned policy—might help us overcome the problems with Border AI that we noted in the introduction.

3. The Border Control Context

There is a long history of (supra-)state governing bodies deciding who belongs and who does not. Those decisions have regularly been justified on the basis of maintaining state sovereignty but can also be seen as processes of (re)territorialization (Vigneswaran, 2013). Political philosophy at this stage remains divided about the ethics of borders (Bartram, 2010; van Houtum, 2012; Genova, 2017; Paasi et al., 2018; Tebble, 2020), with strong arguments being made in favor of open borders (Bartram, 2010). In this article, we cannot resolve those debates. Still, we note that Border AI systems are technological translations of processes of in- and exclusion which bring along new, or amplify pre-existing, patterns of in- and exclusion. With the notion of translations (Latour, 1987), we highlight how Border AI fundamentally transforms the border control context and migrants' perceptions of how to behave and engage with this context. A particular characteristic of this translation is that migration management is translated into an information-sharing problem from which Border AI feeds. This observation is important because the first phase of a GEA requires reviewing and making sense of the context within which a specific technology will be used. To outline this context, we distinguish between the contexts of use, experience, and regulation to help us recognize the different stakeholders and responsibilities relevant to the dialog phase of a GEA.

3.1. The context of use: Why biometrics at the border?

The entanglement of border-technology development and ever more perilous migration journeys has repeatedly been documented (Amoore and Hall, 2010; Pötzsch, 2015). While large-scale databases pose essential questions about how migration is made knowable (Leese et al., 2021; Scheel, 2021), the use of Border AI poses an additional challenge. Large-scale biometric databases can be critiqued for being presented as infallible and neutral while they are not (Leese et al., 2021; van Rossem and Pelizza, 2022). In its use context, Border AI is specific because it draws on such problematic databases and on in situ evaluations of bodies as markers of who gets to move and who does not. Pötzsch (2015) refers to this as creating an iBorder, a border that reads the migrant's body—through iris scans or behavioral biometrics—and makes those readings the basis of modes of in/exclusion.

Notably, the UN Special Rapporteur on contemporary forms of racism, racial discrimination, and xenophobia observes in a 2020 report that the digitalization of borders brings with it an increased risk of discriminating against people on the move (Achiume, 2020). Achiume points to the threat of direct and indirect discrimination via border technology. She maps issues such as racial profiling, dependence on online platforms, and biometric and language recognition onto humanitarian and immigration surveillance practices, technological experimentation, and border externalization. Specifically focusing on identity technologies, Molnar (2020) and Molnar and Gill (2018) have provided multiple case studies highlighting the problematic nature of those systems in action. Molnar and her team note that the experimental use of many of those technologies is problematic: in current modes of producing Border AI, "technological development privileges the private sector as the primary actor in charge of development, with states and governments wishing to control the flows of migrant populations benefiting from these technological experiments" (Molnar, 2020, p. 2).

Those kinds of experimentations often assume that migration is a problem that needs to be addressed. As noted, thinking about Border AI through a responsible design lens does not consider migration a societal problem. Instead, the societal problem is the cementation of unequal power relations and the violation of basic principles of human dignity through AI-mediated border control. Engaging in a GEA means that the design process commences with defining the societal concern at stake. Only then are the different stakeholders that should be included in the technology design process considered. If migration is not a problem in and of itself, then many narratives about migration as a risk are intellectually problematic—yet they remain omnipresent within the border as a context of use. That states or relevant regulatory bodies need certainty over a migrant's identity is regularly justified by distinguishing between two types of risk: how at risk a migrant is in the countries they leave behind and what risk a migrant poses to the state they enter (La Fors and van der Ploeg, 2015).

The latter risk is often described as an economic, social, or security risk. These ideas about risk and risk minimization are often inscribed into Border AI. To further explain: regulatory entities invite companies that claim they can produce systems to read migrants' documents—and increasingly their behaviors and bodies—to establish certainty about the level of risk. In this way, governments invite Border AI companies to assist them in creating identity certainty and risk profiles. Here, processes of in- and exclusion become translated into an information-sharing problem (Latour, 1987; La Fors, 2016) within this use context. Common allegations are that some people on the move are (a) identity fraudsters—claiming to be someone they are not, presumably to gain economic or social benefits—or (b) a security risk to the destination country.

Border AI is not designed to assess migrants according to the risks they might encounter before, during, or after migration but to support an exclusionary binary determination. Border AI, as it is currently developed, therefore prompts additional profiling as an activity to support a risk decision and cannot offer a fair and responsible picture of migrants' realities. Basic ID data such as name and nationality are used to establish certainty and riskiness, and behavioral and biological traits are increasingly replacing those data. The thresholds set by governments for establishing certainty about a migrant's identity and about the level of risk a migrant poses keep rising, creating an ever-expanding data vacuum for companies producing Border AI to fill. As a context of use, the border control context is one in which it becomes increasingly difficult to question who is rejected and on what grounds. Those processes are becoming less and less transparent and are driven not by the desire to protect those on the move but to deter (certain types of) movement.

3.2. The context of experience

In drawing the contours of the border control context for a GEA, one critical factor is who gets to be a stakeholder. In principle, the GEA adopts an inclusive idea of stakeholders. Yet leaving it at the level of identifying governments, technology development firms, and migrants as stakeholders would not do justice to the diversity of different groupings and interests. Regarding the category of the migrant specifically, who is considered to be what kind of migrant is deeply enmeshed in histories of migration control and global inequalities (Scheel and Tazzioli, 2022). Debates surrounding superdiversity have highlighted that experiences and patterns of migration vary significantly over time in terms of compositions, regulations, and categorizations (Meissner and Vertovec, 2015). A single time-point sample of "typical" migrants might not be enough to explore the border as a multiplex context.

The border control context is one where, as Scheel notes, "biometric technologies for border control purposes significantly alters the practical terms and material conditions for the appropriation of mobility [by rendering] migrants' bodies as a means of control by transforming them into data doubles" (Scheel, 2013, p. 597). AI-mediated biometric and administrative readability of migrants' bodies becomes the measure of their acceptability to host societies. Basic assumptions about defining stakeholders are called into question when considering the border as a context of experience. The border control context is experienced in diverse ways, and there is diversity in who experiences it. Recognizing this tension between control, experience, and action highlights why a GEA requires us to pay disproportionate attention to migrant perspectives (or at least the perspectives of those passing through border spaces), since it is their bodies that are being made legible for in/exclusion. While technology developers and policymakers might be people on the move themselves, this may only become evident to them through the type of dialog a GEA aspires to by including the particular experiences of migrants who have followed different trajectories.

Crucially, migrants' experiences include being on the receiving end of Border AI-mediated judgment. Such judgment has been highlighted elsewhere as disallowing any form of digitally expressible forgiveness toward a risk-profiled person (La Fors, 2020). It perpetuates negative experiences in the border space for some. From the perspective of migrants, this cannot be remedied by applying principles of fairness and non-discrimination because both "allow for the distribution of negative judgments or determinations on citizens so long as these determinations are proportionate" (La Fors, 2020). A border technology design that is attentive to the experiences of migrants could, therefore, benefit from the active adoption of forgiveness as a principle.

Additionally, there is the problem of power differentials that become hypervisible in these contexts. To give but one recent example, a Dutch court ruling (Court of The Hague, 2021) that ethnic profiling should be allowed in border control contexts highlights how biometric markers shape the policing of borders and the reproduction of racism. The border control context in this sense is, as Metcalfe (2021, p. 3) notes, imbued with "ongoing conflict, subjectivity and […] transformation." In such a context, Border AI becomes yet another layer of control and surveillance that migrants have no say in, particularly those most at risk. In this sense, it is a context that presents itself as highly flawed in terms of the more inclusive imaginary that should stand at the start of a GEA lens. These flaws are potentially further exacerbated by regulatory measures that shape this context in ways that discourage the participation of migrants in technology development debates and leave only a liminal space for that participation to be meaningful.

3.3. The regulatory context: Which and whose rules apply?

One way to assess the regulatory context is to consider what kinds of principles and relations might be desirable in this context. Debates around the draft EAIA are particularly pertinent here. For example, the EU High-Level Expert Group's Guidelines on Trustworthy AI prescribed the principle of 'non-discrimination, diversity and fairness' (European Commission, 2019) as a guiding principle for AI development. One of the aims of these guidelines is to champion AI technology designs that are inclusive of diverse societal groups. The guidelines suggest co-design approaches to achieve such inclusion or diversity. It is then necessary to reach out to and involve those impacted directly and indirectly by AI technologies. While such efforts are increasingly extended to those who have already earned a residency status, in the border control context, implementing co-design with migrants would mean disregarding the legal standing of different individuals as a qualifying criterion for their participation.

The continuous shifting of rules and regulations concerning migration and the resulting complexities of legal status diversities (Meissner, 2017) matter in the border control context. To illustrate why this is the case, it is instructive to consider the Joint Migration Policy Whitepaper Towards ICT-enabled Integration of Migrants in Europe, an outcome of six Horizon Europe projects. It calls for a "culture of co-design as an internal cultural change in public administrations" (European Commission, 2021a) and for designing inclusive AI-mediated services. However, in adopting the language and logic of integration, the report misses integration policies' destructive and exclusionary nature (Schinkel, 2017). Taking such a critical reading of immigrant integration seriously points us to the multi-stakeholder nature of technology-mediated migration control (Meissner, 2019). It also highlights that calls for co-design require even more extensive inclusiveness as rules about who is eligible to move shift and change. The border control context is where the initial sorting of who is welcome and unwelcome prevents some from ever becoming part of those deemed 'integrable'. Instead, refusal of entry or redirection into deportation facilities based on biometric markers of difference is not uncommon. Those normative acts of sorting out (Bowker and Star, 1999) often put the lives of impacted migrants on hold, as their ability to contest decisions about their status is limited (Drotbohm and Hasselberg, 2014; Genova and Roy, 2020).

Further exemplifying this is the political nature of categories (Suchman, 1993). Participation and co-creation, as championed by the white paper quoted above, are first steps, but they can still be exclusionary. Those systems still bar protection for those most in need of it (Genova and Roy, 2020) and often exclude them from being called upon as relevant stakeholders in those contexts. For those migrants exposed to Border AI-mediated scrutiny to establish their status and realize their goal of entering a host country, co-design and co-creation involving a diversity of relevant stakeholders remain outside the scope of political agendas on migration—even though a GEA would undoubtedly call for them.

Beyond ambitions for co-design, the recently proposed EAIA also points us to essential features of how the border control context can be made sense of for a GEA. The EAIA is the first regulation of its kind for AI systems. Its extraterritorial scope is already producing effects comparable to those that preceded the ratification of the General Data Protection Regulation: various lobby groups are gearing up to limit the Act's impact, while civil society organizations champion its broadest possible implementation. From the perspective of border control authorities, the EAIA introduces novel mechanisms for preventing the harmful effects of AI-mediated inferences by dividing systems according to their harmfulness into four primary risk categories: unacceptable risk, high risk, limited risk, and minimal risk. The Act defines both facial recognition technologies and the use of AI in border control contexts as high risk.

For facial recognition, the Act defines systems for the "real-time" and "post" remote biometric identification of natural persons as high risk. This classification would also hold in a border setting. Despite the EAIA's benefits, Veale and Borgesius (2021) underline as a primary shortcoming of the Act that it relies on the EU's Product Safety Directive from 1980. In 2021, the EU proposed updated legislation—the General Product Safety Regulation (European Commission, 2021c)—to address the earlier directive's main shortcomings. The change was necessary because the directive predated the era of algorithms, platforms, and a digital economy fueled by personal data-based predictions—what Zuboff (2019) refers to as "behavioral surplus". It is important to note that Zuboff's analysis focuses mainly on the US corporate context and the surplus (power) that those actors can generate. Concerning migration, it has been noted that those processes are tightly linked with modes of experimentality that are not limited to corporate actors but have "become a mode of statecraft" (Aradau, 2022, p. 41).

Calls for ever more biometric data in border control contexts create new realities. On the one hand, these call for paying attention to a continuously shifting autonomy of migration (Metcalfe, 2021); the related creative engagement of migrants with borders limits the feasibility of predictive border technologies. On the other hand, this is an area where interventions constrain possible futures, requiring somewhat different ethics (Amoore, 2020). A crux lies in the possibility that current regulatory frameworks aimed at improving the societal effects and governability of new technologies not only produce surplus data but that those data may be used to infringe on basic values further down the line. We need to scrutinize those regulations in light of how they regulate digital economies and how those economies are bound up with political processes and priorities.

Although the AI Act will be influenced by the newly proposed General Product Safety Regulation (European Commission, 2021c), these product safety laws frame AI products in terms of consumer safety. The AI Act highlights the importance of fundamental rights, but because of this framing it remains weak in protecting the fundamental rights of data subjects—those producing the data, often as a by-product of their journey or day-to-day activities. One weakness arises because migrants—while feeding those AI systems with data—are not the consumers of those systems and are thus not protected from them. A second concern is that migrants are subjected to surveillance by Border AI products which feed an apparatus of migration control, making government authorities the actual "consumers" of those systems. This further skews the distribution of power relations within the border control context. Government authorities already hold an increasingly technology-mediated monopolistic position with respect to migrants, to AI developers overseeing the effects of AI systems, and to how standardization bodies define standards for AI systems. The changes in the proposed General Product Safety Regulation, under which AI technologies would fall, are still wanting: they seem unfit to evoke assessments adequate to migrants' fundamental rights, since such assessments would only gauge the extent to which these technologies are dangerous consumer products. Purchasing and deploying AI has provocatively been described by Edwards and Harbinja (2020, p. 301) as not comparable to buying a washing machine.

The Artificial Intelligence Act must also fill this disjuncture in the border control context. The EAIA proposes to significantly restrict potentially detrimental inferences, for instance, by limiting the use of facial recognition by law enforcement authorities. However, scholars have already pointed out that a major drawback of the Act is that it does not offer space for the informational self-determination of migrants and their data doubles (Veale and Borgesius, 2021). It remains to be seen how the daily-life consequences of these technologies will be pitted against the interests of big data companies whose business model depends upon marketing such technologies. Civil society collaborations aim to create political pressure by bringing the side-effects of facial recognition technologies more into the spotlight (see the ReclaimYourFace campaign referenced above).

The EAIA takes a risk-based approach and aims to draw the line as to what counts as acceptable, responsible, and efficient usage of AI systems, using the four risk categories noted above. Biometric systems used and embedded within networks of AI systems for border control fall under the high-risk category and require a prior risk assessment. Such assessments will also shape how companies can engage in more responsible design, implementation, and use of systems for border control, but they should do so less from top-down perspectives.

Taken together, the contexts of use, experience, and regulation highlight how, in a politically charged context such as the border, some basic assumptions of the GEA are challenged. Given the planned regulation, the design of Border AI will favor the system's safety rather than the safety of those impacted by the system—a preference that stems from the fact that the motives behind the regulation originate in catching up with an AI technology push. Such a view is undoubtedly a significant hurdle to identifying and bringing together diverse stakeholders for a GEA dialog. This insight makes it all the more important to continue our thought experiment and lay out what we might expect to happen in following through with the two remaining phases of a GEA.

4. Bringing Actors into Dialog

As this exploration of AI within the specific context of the border has highlighted, the main specificity of the border control context pertains to the evident processes of in- and exclusion that it perpetuates. To continue mapping this context effectively, it is thus necessary to recognize the multiplicity of stakeholders within that context and the circumstances shaping those processes of in- and exclusion. The second phase of a GEA would aim to bring together the many different stakeholders involved. As already noted, because of the sheer diversity of ways the border control context is experienced, defining who gets to be a stakeholder poses a significant challenge to a value-sensitive design of border technologies. We cannot argue that it would be possible to account for this diversity in the dialog phase. Yet, we have tried to suggest that it would be necessary to bring those most disadvantaged and most likely to suffer from AI-mediated border controls to the table. Even then, it is difficult to imagine how a level playing field could be created. In practice, how would we elevate those most in need to the same level as the architects of the technology and those writing and developing the migration policies that act as the underlying rules for implementing that technology?

Because Border AI technologies frame the potential of those least advantaged by them solely through the lens of risk, the likelihood of their inclusion remains relatively low. Power differentials are likely to remain and potentially limit meaningful dialog. Applying a GEA in the border control context to design Border AI more responsibly would undoubtedly stand in contrast with the current deployment of Border AI. Those deployments, for example by Frontex, are focused on keeping migrants at a distance rather than involving them in stakeholder dialog (Laursen, 2022).

Other barriers arise around how to involve, for instance, policymakers in the home countries of migrants, who often shape why migrants want to move in the first place. Furthermore, realizing eye-to-eye dialog remains cumbersome and bound up with statist interpretations of what the problem at hand is. Such difficulties are partly the consequence of how Border AI technologies actively create a skewed visibility of migrants by bringing only their riskiness rather than their human stories to the foreground. This function of the technology means that, in the eyes of local decision-makers, migrants cannot enter and participate in the governance of civic and administrative spaces.

Migrants are not "consumers" of Border AI technologies and therefore fall outside the envisioned regulations—but they are subjected to surveillance by these systems. At the same time, those in charge of operating and deploying these systems and taking decisions with them are the ones who get to assess their effects and how "safe" the systems are. These responsibilities would necessitate exploring the harmful side-effects of AI systems on migrants and taking their viewpoints into account—but this is not yet envisioned in legislation. States cannot live up to their human rights-based responsibilities if they continue to build harmful technology without including those harmed by AI-mediated decisions. So long as Border AI adoption occurs without accounting for its societal impact on migrants or incorporating their encounters with Border AI systems into related policy and design, the overall benefit and policy implications of ethically designed AI systems remain hard to measure.

5. Options for Action

Our analysis thus far shows that the outlook for a successful GEA process in the border control context is bleak. Assuming that it were possible to create a space that brings the needed diversity of stakeholders to the table, what options for action would a GEA point us to? Although we do not want to present possibly unhelpful speculation, for the sake of our thought experiment and to consider ways forward, we will engage in this exercise. In using a GEA, there tend to be three options for action that deserve brief consideration:

5.1. Tweaking the technology

Given our analysis, this would require tweaking the technology by considering migrants not just as a risk but as people—it would mean rethinking the rationale of Border AI technologies. It would also mean tweaking the technology's development and maintenance so that it is designed to account for human rights commitments. Research in other fields suggests that, to foster this kind of rethinking of technology logic, the greatest strides are made in settings that are not top-down supported (Powell, 2021). Whereas any technology design aims to achieve some previously non-existent beneficial functionality, technology design as a creative process and outcome always remains suggestive in society. Tweaking the design toward more responsible AI technology in and for border contexts would therefore require adding more positive directions to the particularly normative parts of Border AI's suggestive nature, which are currently directed only at societal threats. Such tweaking would further mean abandoning one of the central facets around which those technologies are currently built: biometric-based categorizations. Relying on bottom-up creativity here may seem wanting in the face of large-scale, top-down initiatives. It might become more viable if some basic tenets of current technology and migration control logics, like risk-based sorting, are seriously contested and abandoned.

5.2. Tweaking the environment and context

There is value in taking on board the insights developed in critical migration studies through in-depth research. This research points us toward the need to rethink policy priorities such as immigrant integration in ways that do not continuously shift the goalposts for those who want to move but who might not fit what policies consider desirable migrants. We, therefore, need to consider a point made repeatedly across the critical data studies literature: technology cannot fix broken politics.

5.3. Tweaking the user

This part of a GEA makes apparent the importance of asking: who is a user in the border control context? That question is potentially one of the central problems at play. Are the users those who commission the technology, those who use the technology in a specific context, or those who are most intimately impacted by it? As a second point, our analysis accentuates the importance of how these different types of "users" are interconnected within an uneven playing field. Those interconnections are fraught with power differentials that have to be recognized. That migrants are often at the extreme end of such power differentials is shown, for example, in Buoncompagni's work, which offers testimonies on why refugees subject themselves to biometric Border AI: "We register under the use of force because we have no other options to get food or protection" (quoted in Buoncompagni, 2022). Another testimony, from an Eritrean refugee, shows that immigration authorities frame Border AI for refugees as a means by which livelihoods can be improved: "[the] registration system has not initiated any new package of special benefits. We are receiving the same assistance as before, such as food rations and other non-food items; I doubt that the new system will bring any benefits in terms of additional aid" (quoted in Buoncompagni, 2022). Migrants' compliance with Border AI is not about their participation in shaping those technologies. It is an expression of power differentials—in their current configuration, faced with Border AI, migrants' choices are not meaningful and are sometimes about choosing between life and death.

For a GEA approach to Border AI, it might not be necessary to "tweak" one specific user, such as migrants, but rather to tweak the relationships between diverse sets of users and to examine their constraints, assumptions, and power relations in an effort to rebalance norms and offer meaningful choices. This might at least move us somewhat toward a more level playing field. Doing so includes a responsibilization of the human(s) in the loop of the AI, such as policymakers, AI designers, immigration officers, and others. Involvement in the production and implementation of Border AI makes different actors differently responsible for others and shows how narratives of predictive sorting are fraught with ambiguities rather than certainties. In addition, it means sensitizing actors to how their involvement with Border AI positions them in a politically charged network mediated and transformed by Border AI systems. In practical terms, by sensitizing actors through dialog about values and harms, applying the GEA could inform, for instance, the IEEE P7013 Working Group's efforts to develop Inclusion and Application Standards for Automated Facial Analysis Technology (Goodrich, 2019). The standard might thus be transformed to call for refraining from experimental uses of such technologies in border contexts (Molnar, 2020). Achieving such bottom-up informed principles requires the additional efforts a GEA calls for—but it would likely mean abandoning Border AI as we defined it in the introduction.

6. Conclusions

This article presented a thought experiment—an exercise in thinking through what can be learned from examining (biometrics-based) Border AI from a responsible design perspective. We specifically used the GEA for this exercise, choosing it as a more bottom-up, transdisciplinary responsible design lens and a possible alternative to existing top-down ethical, legal, and AI-design frameworks for border control. We started by situating Border AI in its contexts of use, experience, and regulation before commenting on how those contexts facilitate (or fail to facilitate) inclusive and transdisciplinary dialog and the devising of options for action. To sketch those contexts, we drew on the international migration literature and relevant AI policy. Our assessment concluded that the current framing of the proposed EAIA is wanting in accounting for the livelihoods of migrants and for truly trustworthy technology development. We found four reasons for this: (a) the proposed EAIA frames AI systems as products and applies an approach borrowed from European product safety regulations (Veale and Borgesius, 2021); (b) through this framing, the Act puts a higher priority on maintaining the safety of the system than on the safety of people; (c) migrants are not the consumers of Border AI, and therefore incorporating their perspectives would not directly translate into regulatory, product safety compliance requirements for companies producing Border AI; and (d) even though the EU High-Level Expert Group's Guidelines on Trustworthy AI set co-design as a requirement for trustworthy AI, co-designing with migrants who are subjected to surveillance by Border AI will not be incentivized through the EAIA. Such an analysis could be interpreted as reiterating the long-standing problem of giving voice in light of diverse trajectories and experiences. By considering the use of the GEA, we highlighted that it is not simply about resolving the issue of voice but about how regulated technologies may act as translators of power differentials.

We, therefore, emphasize that such regulatory framing also figures in the accumulation of problematic circumstances pertinent to the border control context. The regulatory framing of Border AI adds to the following characteristics of that context: (a) it is already heavily politicized; (b) it propagates the assumption that increasing certainty about a migrant's "integrability" requires addressing biometric data scarcity and building optimized AI-mediated border control; and (c) it remains focused on securitization ideals that rely on maintaining disadvantageous power relations in the name of public security. These characteristics are all used to perpetuate the assumption that Border AI is a viable solution for fair migration management.

After considering how the GEA could be applied in the border control context, we concluded that the guidance-ethics method could not fully reach its potential within that context. Beyond the concerns raised about prevailing assumptions within the border control context, this evaluation is motivated by the institutional embeddedness of Border AI systems within the administrative networks of border control. Redesigning these systems would therefore mean re-negotiating their administrative and institutional infrastructures. In other words, allowing those migrants who are often portrayed as risky to co-create Border AI systems would allow them to re-negotiate and co-create the criteria of their institutional and administrative embeddings. We note that the likelihood of host countries inviting migrants to participate in such exercises is low: migrants' participation could be interpreted as interfering with these countries' sovereignty to define their permanent population and their territoriality. The GEA supports responsible AI innovation that results from multi-stakeholder dialogs through the application of transdisciplinary value-sensitive approaches. Consequently, responsible technologies thus conceived for the border context stand in contrast with that context's current embeddedness in control and migration management priorities.

On a more positive note, it follows that the GEA could be a valuable tool to explore what alternative technologies might emerge if migrants were included in the design of AI systems less reliant on their biometric identities. While our article did not address this, similar thought experiments—potentially followed by application—might be fruitful in other areas of technology development that currently rely heavily on biometric categorizations, with the potential for cementing inequalities that this entails. More specifically, for the border control context, given the insights from our thought experiment, options for action would need to account for dependent and vulnerable positions within current migration control systems. Proportionality in this setting would include regard for migrants' expectations and human rights, and maintaining transparency about—and the option to resist—their involvement in testing Border AI systems.

The practice of involving migrants perceived as posing a risk in co-creating Border AI systems should, however, not in and of itself be thought of as a way to claim ethical technology. It may well not be possible to design trustworthy or responsible Border AI. Rather, the aim should be to aspire to technology design that is responsible and perceived as necessary because migrants can meaningfully empower themselves not just in value creation but in value-sensitive politics. Consequently, in our view, minimum requirements for achieving responsible technology for use in border control contexts might be that those AI systems (a) mediate an identity representation of migrants which depicts—and therefore better accounts for—migrants' needs, aspirations, and especially the risks they run in their home countries, (b) do not (risk) frame migrants through biometric features, (c) are anchored in their broader political and technological infrastructures in such a manner that they at least include migrant-centric requirements, and (d) recognize the diversity of positionalities that people on the move take in border spaces and how a seamless imagination of those spaces might further subjugate some more than others to measures of undue control.

Acknowledgment

We received permission to reproduce the figure we present in this article.

Funding Statement

This work was funded through the authors' employment at the University of Twente and did not depend on specifically allocated project funding.

Competing Interests

The authors declare that no competing interests exist.

Author Contributions

Conceptualization: K.L.F., F.M.; Investigation: K.L.F., F.M.; Methodology: K.L.F., F.M.; Writing—original draft: K.L.F., F.M.; Writing—review and editing: K.L.F., F.M.

Data Availability Statement

We did not generate empirical data for this article but relied on a literature study; our sources are therefore publicly available.

Footnotes

1 See: https://reclaimyourface.eu/ (accessed 15 December 2021).

References

Achiume, TE (2020) Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance. UN Special Rapporteur Report: A/75/590 (accessed 28 April 2021).
Amoore, L (2020) Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press.
Amoore, L and Hall, A (2010) Border theatre: On the arts of security and resistance. Cultural Geographies 17(3), 299–319. http://doi.org/10.1177/1474474010368604
Aradau, C (2022) Experimentality, surplus data and the politics of debilitation in borderzones. Geopolitics 27(1), 26–46. http://doi.org/10.1080/14650045.2020.1853103
Balayn, A and Gürses, S (2021) Beyond Debiasing: Regulating AI and its inequalities. Available at https://edri.org/wp-content/uploads/2021/09/EDRi_Beyond-Debiasing-Report_Online.pdf (accessed 15 December 2021).
Bartram, D (2010) International migration, open borders debates, and happiness. International Studies Review 12(3), 339–361. http://doi.org/10.1111/j.1468-2486.2010.00942.x
Biometric Update (2021) Contactless biometrics and airport automation demand spiking ahead of passenger return. Available at www.biometricupdate.com/202104/contactless-biometrics-and-airport-automation-demand-spiking-ahead-of-passenger-return (accessed 28 April 2021).
Bowker, GC and Star, SL (1999) Sorting Things Out: Classification and its Consequences. Cambridge, MA: MIT Press.
Buoncompagni, G (2022) Connected refugees: (Liquid) surveillance or computer management of migration? Tafter Journal 119. Available at tafterjournal.it/2022/04/15/connected-refugees-liquid-surveillance-or-computer-management-of-migration/ (accessed 16 March 2021).
Drotbohm, H and Hasselberg, I (2014) Deportation, anxiety, justice: New ethnographic perspectives. Journal of Ethnic and Migration Studies 41(4), 551–562. http://doi.org/10.1080/1369183X.2014.957171
Edwards, L and Harbinja, E (2020) "Be right back": What rights do we have over post-mortem avatars of ourselves? In Edwards, L, Schäfer, B and Harbinja, E (eds), Future Law: Emerging Technology, Regulation and Ethics. Edinburgh: Edinburgh University Press, pp. 262–292.
European Commission (2019) EU high-level expert group guidelines on trustworthy artificial intelligence. Available at https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence (accessed 22 July 2021).
European Commission (2020) Assessment list for trustworthy artificial intelligence (ALTAI) for self-assessment: Shaping Europe's digital future. Available at https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment (accessed 22 July 2021).
European Commission (2021a) Migration whitepaper: A new approach to digital services for migrants. Shaping Europe's digital future. Available at https://digital-strategy.ec.europa.eu/en/news/migration-whitepaper-new-approach-digital-services-migrants (accessed 22 July 2021).
European Commission (2021b) Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts. COM(2021) 206 final. Available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206 (accessed 15 December 2021).
European Commission (2021c) Proposal for a regulation of the European Parliament and of the Council on general product safety. COM(2021) 346 final. Available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0346 (accessed 15 December 2021).
Floridi, L and Cowls, J (2019) A unified framework of five principles for AI in society. Harvard Data Science Review 1(1). http://doi.org/10.1162/99608f92.8cd550d1
Floridi, L, Cowls, J, Beltrametti, M, Chatila, R, Chazerand, P, Dignum, V, Luetge, C, Madelin, R, Pagallo, U, Rossi, F, Schafer, B, Valcke, P and Vayena, E (2018) AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines 28(4), 689–707. http://doi.org/10.1007/s11023-018-9482-5
Genova, N (ed) (2017) The Borders of "Europe": Autonomy of Migration, Tactics of Bordering. Durham: Duke University Press.
Genova, N and Roy, A (2020) Practices of illegalisation. Antipode 52(2), 352–364. http://doi.org/10.1111/anti.12602
González Fuster, G and Peeters, MN (2021) Person identification, human rights and ethical principles: Rethinking biometrics in the era of artificial intelligence. PE 697.191. Available at https://www.europarl.europa.eu/RegData/etudes/STUD/2021/697191/EPRS_STU(2021)697191_EN.pdf (accessed 27 July 2022).
Goodrich, J (2019) Standards working group takes on facial recognition: Chair of IEEE Standards Association working group explains what the organisation is doing to help ensure the technology is used ethically. Available at https://spectrum.ieee.org/standards-working-group-takes-on-facial-recognition (accessed 27 August 2021).
Hao, K (2019) In 2020, let's stop AI ethics-washing and actually do something. MIT Technology Review. Available at https://www.technologyreview.com/2019/12/27/57/ai-ethics-washing-time-to-act/ (accessed 17 December 2021).
La Fors, K (2016) Monitoring migrants or making migrants 'misfit'? Data protection and human rights perspectives on Dutch identity management practices regarding migrants. Computer Law & Security Review 32(3), 433–449. http://doi.org/10.1016/j.clsr.2016.01.010
La Fors, K (2020) Legal remedies for a forgiving society: Children's rights, data protection rights and the value of forgiveness in AI-mediated risk profiling of children by Dutch authorities. Computer Law & Security Review 38, 105430. http://doi.org/10.1016/j.clsr.2020.105430
La Fors, K and van der Ploeg, I (2015) Migrants at/as risk: Identity verification and risk-assessment technologies in the Netherlands. In van der Ploeg, I and Pridmore, J (eds), Digitizing Identities. New York: Routledge, pp. 261–281.
Latour, B (1987) Science in Action: How to Follow Scientists and Engineers through Society. Cambridge, MA: Harvard University Press.
Laursen, L (2022) Europe expands virtual borders to thwart migrants: Our investigation reveals that Europe is turning to remote sensing to detect seafaring migrants so African countries can pull them back. 04-02. Available at https://spectrum.ieee.org/remote-sensing-of-migrants (accessed 27 July 2022).
Leese, M, Noori, S and Scheel, S (2021) Data matters: The politics and practices of digital border and migration management. Geopolitics 11(3), 1–21. http://doi.org/10.1080/14650045.2021.1940538
Meissner, F (2017) Legal status diversity: Regulating to control and everyday contingencies. Journal of Ethnic and Migration Studies 44(2), 287–306. http://doi.org/10.1080/1369183X.2017.1341718
Meissner, F (2019) Of straw figures and multi-stakeholder monitoring – A response to Willem Schinkel. Comparative Migration Studies 7(1), 14–52. http://doi.org/10.1186/s40878-019-0121-y
Meissner, F and Vertovec, S (2015) Comparing super-diversity. Ethnic and Racial Studies 38(4), 541–555. http://doi.org/10.1080/01419870.2015.980295
Metcalfe, P (2021) Autonomy of migration and the radical imagination: Exploring alternative imaginaries within a biometric border. Geopolitics 180(3), 1–23. http://doi.org/10.1080/14650045.2021.1917550
Molnar, P (2020) Technological testing grounds: Migration management experiments and reflections from the ground up. EDRi, November. Available at edri.org/wp-content/uploads/2020/11/Technological-Testing-Grounds.pdf (accessed 20 April 2021).
Molnar, P and Gill, L (2018) Bots at the gate: A human rights analysis of automated decision-making in Canada's immigration and refugee system. The Citizen Lab, University of Toronto. Available at citizenlab.ca/wp-content/uploads/2018/09/IHRP-Automated-Systems-Report-Web-V2.pdf (accessed 17 March 2021).
Oliveira Martins, B and Jumbert, MG (2020) EU border technologies and the co-production of security "problems" and "solutions". Journal of Ethnic and Migration Studies 2012(3), 1–18. http://doi.org/10.1080/1369183X.2020.1851470
Paasi, A, Prokkola, E-K, Saarinen, J and Zimmerbauer, K (2018) Borderless Worlds for Whom? Abingdon, Oxfordshire: Routledge.
Pötzsch, H (2015) The emergence of iBorder: Bordering bodies, networks, and machines. Environment and Planning D: Society and Space 33(1), 101–118. http://doi.org/10.1068/d14050p
Powell, AB (2021) Undoing Optimisation: Civic Action in Smart Cities. New Haven: Yale University Press.
Privacy International (2021) Privacy International's (PI) submission for the UN High Commissioner for Human Rights' report on the right to privacy and artificial intelligence. Available at https://privacyinternational.org/sites/default/files/2021-06/PI%20submission%20to%20OHCHR%20Report%20on%20AI%20and%20Privacy%202021%20final.pdf (accessed 22 July 2021).
Robert, LP, Bansal, G and Lütge, C (2020) ICIS 2019 SIGHCI Workshop Panel Report: Human–Computer Interaction Challenges and Opportunities for Fair, Trustworthy and Ethical Artificial Intelligence. AIS Transactions on Human-Computer Interaction, 96–108. http://doi.org/10.17705/1thci.00130
Scheel, S (2013) Autonomy of migration despite its securitisation? Facing the terms and conditions of biometric rebordering. Millennium: Journal of International Studies 41(3), 575–600. http://doi.org/10.1177/0305829813484186
Scheel, S (2021) The politics of (non)knowledge in the (un)making of migration. Zeitschrift für Migrationsforschung/Journal of Migration Research 1(2). http://doi.org/10.48439/zmf.v1i2.113
Scheel, S and Tazzioli, M (2022) Who is a migrant? Abandoning the nation-state point of view in the study of migration. Migration Politics 1(1). http://doi.org/10.21468/MigPol.1.1.002
Schinkel, W (2017) Imagined Societies. Cambridge: Cambridge University Press.
Stuart, MT (2017) How thought experiments increase understanding. In Stuart, MT, Fehige, Y and Brown, JR (eds), The Routledge Companion to Thought Experiments. New York: Routledge, pp. 526–544.
Suchman, L (1993) Do categories have politics? Computer Supported Cooperative Work (CSCW) 2(3), 177–190. http://doi.org/10.1007/BF00749015
Taylor, L and Meissner, F (2019) A crisis of opportunity: Market-making, big data, and the consolidation of migration as risk. Antipode 52(1), 270–290. http://doi.org/10.1111/anti.12583
Tebble, AJ (2020) More open borders for those left behind. Ethnicities 20(2), 353–379. http://doi.org/10.1177/1468796819866351
Uitspraak ECLI:NL:RBDHA:2021:10283, 22 September 2021, Court of The Hague. Available at https://uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2021:10283 (accessed 12 December).
van de Poel, I (2020) Embedding values in artificial intelligence (AI) systems. Minds and Machines 30(3), 385–409. https://doi.org/10.1007/s11023-020-09537-4
van der Ploeg, I and Sprenkels, I (2011) Migration and the machine-readable body: Identification and biometrics. In Dijstelbloem, H and Meijer, A (eds), Migration and the New Technological Borders of Europe. Basingstoke: Palgrave Macmillan, pp. 68–105.
van Houtum, H (2012) The geopolitics of borders and boundaries. Geopolitics 10(4), 672–679. http://doi.org/10.1080/14650040500318522
van Rossem, W and Pelizza, A (2022) The ontology explorer: A method to make visible data infrastructures for population management. Big Data & Society 9(1), 205395172211040. http://doi.org/10.1177/20539517221104087
Veale, M and Borgesius, FZ (2021) Demystifying the draft EU artificial intelligence act: Analysing the good, the bad, and the unclear elements of the proposed approach. Computer Law Review International 22(4), 97–112. http://doi.org/10.9785/cri-2021-220402
Verbeek, P-P and Tijink, D (2020) Guidance ethics approach: An ethical dialogue about technology with perspective on actions. ECP. Available at ecp.nl/publicatie/guidance-ethics-approach (accessed 19 April 2021).
Vigneswaran, D (2013) Territory, Migration and the Evolution of the International System. Basingstoke: Palgrave Macmillan.
Zuboff, S (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, 1st Edn. New York: Public Affairs.

Figure 1. The three stages of the guidance-ethics approach (Verbeek and Tijink, 2020).
