Introduction
Whether in Gaza for the purposes of identifying hostagesFootnote 1 or in Ukraine to identify the deceased or prisoners of war,Footnote 2 facial recognition technologies (FRTs) powered by artificial intelligence (AI) are increasingly being utilized on modern battlefields. Recourse to FRTs is part of a broader trend among many military forces of “leverag[ing] biometrics to establish ‘identity dominance’ over the enemy”.Footnote 3 The concept of biometrics can refer to both a process and a characteristic. It is understood as the automated recognition of individuals based on their behavioural or biological features, such as iris, gait, fingerprint, or face topography.Footnote 4 First rolled out on a major scale in a military context in Afghanistan over two decades ago as a force protection measure to counter a battlefield threat,Footnote 5 biometric recognition is now used for both defensive and offensive purposes.Footnote 6 While biometrics have reportedly been employed for targeting,Footnote 7 little is known about the use of FRTs for such purposes so far.Footnote 8 However, given that some States are already in possession of drones equipped with facial recognition software for target acquisition,Footnote 9 it is reasonable to expect that FRT-based targeting is around the corner.
Such an inevitability brings to the fore the question of FRTs' compliance with the applicable international law norms, especially international humanitarian law (IHL) and international human rights law (IHRL). Whether owing to misconceptions about military decision-making processes or misunderstandings of how advanced technologies are actually deployed, the combat use of new technological tools in general, and those powered by AI in particular, continues to cause quite a stir. This contribution aims to counter the ubiquitous Terminator-esque narratives in the literatureFootnote 10 by offering a practice-based examination of the impact and risks that FRT-based targeting during armed conflict might have under public international law.
The article is structured as follows. To approach the issues at hand with evidence-based knowledge rather than sci-fi-anchored speculation, the first section provides an overview of the existing FRTs and the algorithms they are based on. The second section starts with a reiteration of the technological neutrality of IHL and continues with a reflection on the challenges in applying relevant IHRL norms, chiefly the right to privacy and the right to life, in the context of armed conflict in general and the conduct of hostilities in particular. The third section examines the practical implementation of FRTs in combat – that is, targeted killings. The fourth section offers some concluding remarks.
A few clarifying thoughts are needed before proceeding with the analysis. First and foremost, this article does not aspire to provide a comprehensive examination of the potential legal exposure that the use of biometrics as such, or even specifically FRTs, might generate for armed forces and the States they belong to.Footnote 11 In particular, it does not deal with data protection regulations, which differ greatly across national and regional regimes;Footnote 12 we leave this aspect of military legal interoperability to other commentators.Footnote 13 Also not addressed in this article, but admittedly relevant for a wider inquiry into the subject, are various pre-deployment aspects of FRTs such as the data mining, collection and processing necessary for the designing and testing of a given software. Finally, while recognizing that in contemporary theatres, armed forces perform many functions, including law enforcement (in occupied territories, for example),Footnote 14 we focus on the use of FRTs in combat. This is an important distinction. Despite the increased tendency to cast (mis)judgement on the military utility and legality of FRTs based on the increasingly stringent regulation of the use of biometrics by law enforcement authorities in many jurisdictions,Footnote 15 both the legal framework and the operational reality of an armed conflict differ greatly from peacetime policing. While some analogies might admittedly be made between peacetime policing and law enforcement functions performed by the military, it is conceptually fallacious to extend such conclusions to the conduct of hostilities in armed conflict. It is the latter category that the rest of this article deals with.
Technological overview
“Facial recognition technologies” is an umbrella term denoting various technologies of differing levels of advancement. In fact, different layers of facial scanning technologies exist, each with its own purposes and outcomes. The simplest layer is facial detection technology,Footnote 16 used, for example, by cameras to focus on faces when a picture is taken.Footnote 17 The second layer is emotion detection, which enables commercial companies to analyze facial expressions and infer emotions, and often incorporates additional features to collect basic demographic data, such as gender and age, about their potential clients. The third, most advanced layer is facial recognition technology,Footnote 18 based on the collection of information through a point-based design that analyzes a person's facial structure. The information about each person is collected and stored in a database for comparison with future data. While facial recognition is considered to be the least accurate biometric (in comparison with iris recognition, fingerprint identification and gait recognition), it is often used because it can be collected from a distance and does not require active participation by the subjects being analyzed.Footnote 19 As such, it is arguably particularly apt for lethal targeting, as the enemy need not be aware that his adversaries possess his biometric data and can use it for an attack.
The process of using the biometric point-based system is composed of four main steps. The first is finding face(s) within a photo or a video sequence (both can contain more than one face). The second step is to extract the available data about the obtained face, and the third step is to process that data through the biometric point-based system and create a mathematical representation called a “faceprint”. The fourth step is to compare the new faceprint with an existing database.Footnote 20 This process also enables data sharing between different databases, which is sometimes referred to as a “ping and ring” mechanism:Footnote 21 one operator queries other operators (such as different government agencies or equivalent agencies in different countries) to ask whether the faceprint is available in their databases (ping), and if they answer positively, the operator contacts them directly (ring) to request the relevant data.Footnote 22
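To make these four steps concrete, the sketch below walks through the pipeline in Python. It is a minimal illustration under stated assumptions, not a description of any fielded system: detect_faces and extract_faceprint are hypothetical placeholders for a real detector and embedding model (they return dummy values so that the script runs end to end), and only the final comparison step is implemented concretely, as a nearest-neighbour search over faceprint vectors.

```python
import numpy as np

def detect_faces(image: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Step 1 (hypothetical stub): find face bounding boxes (x, y, w, h)."""
    return [(40, 40, 120, 120)]  # dummy box for illustration

def extract_faceprint(image: np.ndarray,
                      box: tuple[int, int, int, int]) -> np.ndarray:
    """Steps 2-3 (hypothetical stub): crop the face and map it to a
    fixed-length 'faceprint' vector; a real model would output a learned
    embedding rather than this seeded dummy vector."""
    rng = np.random.default_rng(seed=hash(box) % 2**32)
    vec = rng.normal(size=128)
    return vec / np.linalg.norm(vec)  # L2-normalize for distance comparison

def match_against_database(faceprint: np.ndarray,
                           database: dict[str, np.ndarray],
                           threshold: float = 0.9) -> str | None:
    """Step 4: compare the new faceprint with stored ones and return the
    closest match, but only if its distance falls below the threshold."""
    best_id, best_dist = None, float("inf")
    for person_id, stored in database.items():
        dist = float(np.linalg.norm(faceprint - stored))
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    return best_id if best_dist < threshold else None

# Usage: enrol one faceprint, then scan a frame and query the database.
frame = np.zeros((480, 640, 3))  # placeholder for a camera frame
database = {"person_x": extract_faceprint(frame, (40, 40, 120, 120))}
for box in detect_faces(frame):
    print(match_against_database(extract_faceprint(frame, box), database))
```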
From a technical point of view, FRTs refer to AI algorithms that can identify individuals from images and videos through the analysis of facial features.Footnote 23 Such systems have been in use in different industries since the 1960s, but it was not until the early 2000s that this technology made the leap into commercial, governmental and security use.Footnote 24 Over the years, FRTs and the algorithms that operate them have developed significantly. One of the major developments has been the shift from a single-mode approach – usually based on comparison with a single photo – to a multimodal approach, which uses different machine learning algorithms, such as deep learning, support vector machines and decision trees, to analyze, interpret and compare data from multiple modalities (such as images, video and audio) in order to conduct facial recognition.Footnote 25 As a result, the speed of recognition has increased tremendously,Footnote 26 and at the same time, the success rate of facial recognition systems can reach over 99% in ideal conditions – i.e., when comparing designated photos taken for the purpose of facial recognition.Footnote 27 However, when it comes to photos of lower quality, such as when either or both photos were taken in motion or without a clear background, the recognition error rate can reach 20% and above.Footnote 28
The error rate encompasses, inter alia, the problems of false positives and false negatives. A false positive means that the FRT alerts the user that the two photos being compared are similar even though they are not, leading to the false identification of one person as someone else. A false negative relates to a situation where the system fails to identify a specific person. To minimize the risk of false positives, some algorithms have a confidence threshold that will only return a positive result if the analysis leads, for example, to 99% certainty.Footnote 29 The cost of using such a safety mechanism, however, is a significant increase in false negative results.Footnote 30 A very colourful example of the importance of the confidence threshold can be seen in a facial recognition experiment run by the American Civil Liberties Union (ACLU). In the experiment, the ACLU used Amazon's Rekognition FRT and found that twenty-eight members of the US Congress, out of a total of 533 members, were wrongly matched with mugshots of people who had been arrested.Footnote 31 In response, Amazon stated that the ACLU had used a confidence threshold of 80% rather than the 95% recommended for law enforcement activities.Footnote 32 As far as the authors are aware, no unclassified military information on the threshold of confidence required for battlefield employment of FRTs is available in the public domain.
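The trade-off just described can be made tangible with a short simulation. The score distributions below are assumptions chosen purely for illustration (they do not reflect Rekognition or any real system); the point is simply that raising the confidence threshold suppresses false positives at the cost of markedly more false negatives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed similarity scores in [0, 1]: genuine pairs (same person) tend to
# score high, impostor pairs (different people) lower. Illustrative only.
genuine = np.clip(rng.normal(loc=0.92, scale=0.05, size=100_000), 0.0, 1.0)
impostor = np.clip(rng.normal(loc=0.70, scale=0.10, size=100_000), 0.0, 1.0)

for threshold in (0.80, 0.95):
    false_positive_rate = np.mean(impostor >= threshold)  # wrong person accepted
    false_negative_rate = np.mean(genuine < threshold)    # right person rejected
    print(f"threshold={threshold:.2f}  "
          f"false positives={false_positive_rate:.1%}  "
          f"false negatives={false_negative_rate:.1%}")
```

Under these assumed distributions, moving the threshold from 80% to 95% cuts the false positive rate by more than an order of magnitude while multiplying the false negative rate many times over – the same dynamic that separated the ACLU's and Amazon's readings of the experiment.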
Lastly, it is worth distinguishing between biometric identification, which enables one to pick a specific person out from among others (finding a specific strand of hay in a haystack – a one-to-many search), and biometric authentication, which enables one to confirm the identity of a specific person (a one-to-one comparison), much like when someone unlocks their computer or phone using a facial recognition protection system.Footnote 33 From an operational perspective, identification would be the form of facial recognition apt for targeted killings of civilians directly participating in hostilities, while authentication would be a particularly useful tool for targeted killings of combatants (see discussion below).
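The difference between the two modes can be expressed in a few lines of code. In the hedged sketch below, faceprint is again a hypothetical stand-in for an embedding model; identification is a one-to-many search over the whole database, while authentication is a one-to-one comparison against a single claimed identity.

```python
import numpy as np

rng = np.random.default_rng(1)

def faceprint() -> np.ndarray:
    """Hypothetical stand-in for an embedding model's output (unit vector)."""
    vec = rng.normal(size=128)
    return vec / np.linalg.norm(vec)

# Enrolled database of known faceprints.
database = {name: faceprint() for name in ("alpha", "bravo", "charlie")}

def identify(probe: np.ndarray, threshold: float = 0.9) -> str | None:
    """Identification (one-to-many): search the whole database for the
    closest match - the 'strand of hay in a haystack' problem."""
    name, dist = min(((n, float(np.linalg.norm(probe - fp)))
                      for n, fp in database.items()), key=lambda p: p[1])
    return name if dist < threshold else None

def authenticate(probe: np.ndarray, claimed: str,
                 threshold: float = 0.9) -> bool:
    """Authentication (one-to-one): verify the probe against one claimed
    identity only, as when unlocking a phone."""
    return float(np.linalg.norm(probe - database[claimed])) < threshold

probe = database["bravo"] + rng.normal(scale=0.05, size=128)  # noisy re-capture
print(identify(probe))               # -> 'bravo' (within threshold)
print(authenticate(probe, "bravo"))  # -> True
```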
Legal and conceptual framework
IHL and new technologies
One of the prominent features of the ongoing debate on emerging disruptive technologies is the pervasive conflation of legal and ethical standards.Footnote 34 The latter, often disguised under the pretence of the protection of human dignity, usually underpin appeals to ban a given technology.Footnote 35 While not dismissing such considerations out of hand, it is our position that they do not, as such, influence the legal assessment of the potential use of a given technological tool. IHL is already built on a delicate balance between humanitarian concerns and military necessity,Footnote 36 and it is neither necessary nor practical to further complicate the examination of a given means or method of warfare with abstract, broadly conceived deontological concerns. In short, while IHL factors in humanitarian imperatives, it is in our view technology-neutral and can be applied “effectively and fairly in different technological contexts”.Footnote 37
Another manoeuvre frequently used by opponents of “militarizing” a given technological tool is grounding the argument in a fictional case study, which often aggrandizes the capabilities of a given tool and/or sets it in a clearly unlawful context.Footnote 38 Such approaches might be intellectually entertaining, but they fail to advance the conversation on the legal reverberations of military technologies. The fact that one can imagine a situation in which an item of military equipment is used in breach of IHL does not make the equipment unlawful. In fact, most, if not all, military equipment can be used in ways that are not in compliance with the most fundamental principles of IHL.Footnote 39 In other words, in the case of the technology at hand, one could inquire whether, for example, the use of an anti-personnel autonomous weapon system capable of acquiring and engaging targets based solely on a positive identification from an FRT, with no temporal or geographical restriction and no option to abort, complies with IHL; but this question, framed in this way, would be highly abstract and divorced from reality. In practice, armed forces take advantage of biometrics as “a complementary source of information to build the layers of knowledge and insights into the individual of potential interest”.Footnote 40 Consequently, a hard-headed examination of a given tool's impact and risks should focus on the tool's normal or expected uses, taking into account that in the majority of circumstances, FRT deployment would augment a combatant's decision-making rather than serve as the only source of targeting intelligence.Footnote 41 As the discussion below demonstrates, especially in the context of targeted killings, it is not inconceivable that facial recognition might be compliant with IHL if other conditions are met.
Furthermore, a pragmatic reflection on the combat use of FRTs requires a determination of the factual circumstances in which depriving the enemy of anonymity – which is considered by some intelligence experts to be “the most powerful weapon on earth”Footnote 42 – provides the armed forces with the upper hand.Footnote 43 Note that the power of anonymity comes into play mainly in conflicts with an asymmetric element. In a textbook international armed conflict, the identity of the members of the opposing forces, as distinguished from the civilian population via a uniform or a fixed distinctive sign recognizable at a distance,Footnote 44 is generally legally and operationally irrelevant.Footnote 45 Consequently, it can be expected that FRTs are likely to be used in two conceptually overlapping use cases: targeted killings and directing attacks against persons directly participating in hostilities. Before examining these use cases in turn in the next section, a few clarifying words on the right to privacy and its application in active hostilities, as well as on the implications of the use of FRTs on the right to life, are necessary.
The right to privacy in armed conflict
IHL might be the main body of law governing armed conflicts, but it is not the only one; it is by now largely uncontroversial that IHRL does not cease to apply in times of armed conflict.Footnote 46 As noted by Noam Lubell and Nancie Prud'homme, “[t]he existence of a relationship between [IHRL] and [IHL] is now widely accepted. Their concurrent application is at present more or less a fait accompli, but there remain debates on the nature of their interaction.”Footnote 47
Revisiting the various models of IHL and IHRL interplay is beyond the scope of this article.Footnote 48 What matters from the military operational perspective is that FRTs are, in fact, fairly intrusive, and their deployment does interfere with the privacy of the local population.Footnote 49 However, it does not automatically mean that a party to the conflict deploying FRTs to augment its targeting decision-making violates its international obligations. This is because IHL protects (some facets of) the protected persons’ privacy only in the context of detention (and arguably occupation),Footnote 50 and no IHL provision touches upon the privacy implications of the conduct of hostilities.
In turn, whether or not the IHRL right to privacy applies depends, in the first place, on where the attack is taking place. Extraterritorial application of IHRL is another vexed topic, a detailed examination of which is again beyond the scope of this contribution.Footnote 51 For the purposes of the present discussion, it suffices to underline that in light of current international case law, it can reasonably be argued that if a given territory was not controlled by a party to a conflict prior to the conflict, active hostilities preclude the establishment of effective control by the State over an area which would trigger IHRL applicability.Footnote 52 In other words, if an attack is conducted against a target outside of a State's territory in “the context of chaos” characteristic of active hostilities, the right to privacy does not apply, as the so-called personal model of jurisdiction based on “State agent authority and control” cannot be established either.Footnote 53
When the attack is taking place in the territory under the effective control of a party to the conflict, the right to privacy does apply, but it should not be read to imply that the combat use of FRTs necessarily breaches a State's IHRL obligations. The right to privacy, whether customary or treaty-based,Footnote 54 protects the local population only against arbitrary (and/or unlawful) interferences.Footnote 55 What is arbitrary/unlawful in armed conflict ought to be interpreted first and foremost in light of IHL,Footnote 56 which, as will be further discussed below, prioritizes doing everything feasible to verify whether the target is a military objective over the privacy rights of the local population. Many theories have been put forward as to what principles need to be met to ensure that the interference with the right to privacy is not arbitrary,Footnote 57 but all recognize that compliance with their recommended principles is always context-specific.Footnote 58 As Daniel Bethlehem has aptly observed, “[a]s a rule of thumb, the closer one gets to the battlefield, the less amenable to reasonable application are most provisions of [IHRL]”.Footnote 59 This is definitely the case for non-absolute rights with built-in exceptions, such as the right to privacy.
The right to life and the use of FRT-based targeting
Another fundamental human right that needs to be mentioned in the context of FRTs and targeting is the right to life. Even if we reject the alarmist approaches calling for a complete ban on the use of AI and FRTs on the battlefield, some points of concern remain. The right to life is enshrined, inter alia, in Article 6 of the International Covenant on Civil and Political Rights, which provides that “[e]very human being has the inherent right to life. This right shall be protected by law. No one shall be arbitrarily deprived of his life.”Footnote 60 In its General Comment 36, published in 2019, the United Nations (UN) Human Rights Committee noted in the context of the application of that right during armed conflicts that “[u]se of lethal force consistent with international humanitarian law and other applicable international law norms is, in general, not arbitrary”.Footnote 61 This raises an important question: can a mistaken strike – that is, an attack against a person mistakenly classified as a military objective – be classified as inconsistent with IHL and therefore arbitrary?
In the context at hand, a mistaken strike would likely be a consequence of a false positive – i.e., incorrect recognition – as described in the technological overview earlier in this article. In the context of an armed conflict, when the adversary combatants are anonymous and are targeted for their status and not for their personal identity, a false positive should not make much of a difference. However, when identity is an element of the targeting process, such as in the case of targeted killings (see further discussion below), false positives can lead to attacking the wrong person. If that person is still a lawful target as a combatant or a civilian taking direct part in hostilities, there is no prima facie violation of IHL, and thus, in general, no violation of the right to life.Footnote 62 The situation is legally more problematic when the person directly attacked was not a lawful target,Footnote 63 leading ostensibly to the violation of the principle of distinction, pursuant to which “[t]he civilian population as such, as well as individual civilians, shall not be the object of attack”.Footnote 64 While the level of certainty required under the principle of distinction is subject to lingering debate in scholarship,Footnote 65 it is generally accepted that absolute certainty regarding whether the target is a military objective is not required.Footnote 66 A determination of whether a given mistaken engagement is unlawful hinges on compliance with the obligation to do everything feasible to verify that the target is a military objective.Footnote 67 There is no doubt that the obligation to verify targets is an obligation of means, not of result, but it could be argued that employing an insufficiently reliable FRT violates a duty of care,Footnote 68 and that a mistaken attack based on a false positive identification should be considered indiscriminate and thus arbitrary.
This does not mean, however, that an FRT cannot be used for targeting if it is less than 100% accurate; as will be discussed in the next section, there are several safety mechanisms to reduce such risks.
FRT-augmented targeting: The use cases
When it comes to the use of FRTs for targeted killings as a method of warfare, the discussion focuses on the fundamental principles of IHL, especially distinction and precautions, with regard to the decision to use force against an individual. Targeted killings have become a common practice both within armed conflicts and as a tool of self-defence,Footnote 69 usually exercised extraterritorially. Although the notion of “targeted killings” does not appear in either codified or customary IHL, it is typically defined as “the intentional, premeditated, and deliberate use of lethal force against a specific individual who is not in the physical custody of the perpetrator”.Footnote 70 In simpler terms, we can refer to this as “identified targeting”. Intentionally attacking an individual based on facial recognition aligns precisely with this definition, thereby raising significant questions about compliance with IHL when the action takes place within the framework of an armed conflict.
Attacks against human targets
When it comes to the conduct of hostilities, and especially targeting, IHL is very straightforward. Direct attacks can only be conducted against a person who constitutes a lawful target – either a combatant or a civilian taking direct part in hostilities.Footnote 71 What exactly amounts to direct participation in hostilities (DPH) remains perhaps one of the most controversial concepts in contemporary IHL doctrine and practice.
In the absence of any treaty definition or uniform State practice supported by opinio juris, the International Committee of the Red Cross's (ICRC) 2009 Interpretive Guidance on the Notion of Direct Participation in Hostilities (ICRC Interpretive Guidance),Footnote 72 in concert with its ensuing critiquesFootnote 73 and the responses thereto,Footnote 74 is widely considered a legal touchstone on how to interpret DPH in contemporary counter-insurgency operations. Broadly speaking, the ICRC Interpretive Guidance distinguishes between “sporadic” DPH (resulting in the temporal scope of loss of protection during “the execution of a specific act of [DPH], as well as the deployment to and the return from the location of its execution”) and “continuous combat function” (CCF),Footnote 75 resulting in the loss of protection due to one's status in an organized armed group and one's role in it.Footnote 76 While the distinction between the two categories has been subject to fierce criticism, and the notion of CCF was originally confined to non-international armed conflicts, the prevailing view nowadays appears to be that members of an organized armed group (or an armed wing of a terrorist organization) may be targeted similarly to members of State armed forces.Footnote 77 Put differently, the operational reality of counter-insurgency has stimulated an evolution of DPH from a purely conduct-based notion into a more fluid mix of conduct and status. That said, irrespective of the DPH interpretation adopted, parties to the conflict are obliged to do everything “feasible”Footnote 78 to verify that targets are military objectives before launching an attack.Footnote 79 Compliance with the obligation to do one's best to verify the nature of the target, just like all facets of the principle of precaution, is therefore closely tied to the collection and analysis of information about potential targets.Footnote 80
How do FRTs fit into this matrix of norms? From a tactical operational perspective, augmenting combatant decision-making with insights from FRTs offers obvious benefits:
[T]he use of FRT to scan a crowd of faces and run those images against a database of known combatants and non-combatants could significantly enhance operational effectiveness and ensure compliance with IHL. Not only would it allow for the more efficient use of violence, but FRT deployment could also augment a soldier's decision-making and save the lives of innocent civilians.Footnote 81
Such reasoning, however, does not hold in cases of “sporadic” DPH, in which one's identity is irrelevant and only one's conduct matters. But it does work very well for status-based interpretations of DPH, when a party to the conflict has prior knowledge of one's membership in an organized armed group and is in possession of other intelligence suggesting that the person constitutes a threat. This is arguably a widespread practice among many States engaged in counter-insurgency, chiefly the United States and France,Footnote 82 and the increasing use of FRTs for targeting is likely to make it spread further. While debates concerning target selection resting on the objective of pre-empting threats continue, it needs to be noted that such practices are not free of pitfalls. Crucial among these is the blurring of the line between individuals who continue to pose a threat to a party to the conflict and those who engaged in hostile actions in the past but no longer pose a threat and are instead attacked on a punitive basis, which would be incongruent with IHL.
FRT-based targeted killings
Some discussion has taken place about a potential requirement to capture rather than kill when the option is available,Footnote 83 but in order to focus on the specific question of the use of FRTs and its compliance with the principle of distinction, in this subsection we proceed under the following four basic assumptions:
1. the designated target constitutes a lawful target;
2. there is an imminent necessity to neutralize the targeted person;
3. no excessive collateral damage is expected from the attack; and
4. no less-than-lethal alternative would neutralize the threat posed by the individual.
In such admittedly ideal circumstances, the crux of the operation lies in the positive identification of the target. In practice, this can be done by human agents through visual confirmation (which often includes putting those agents at substantial risk) or by recourse to FRTs (authentication function). This aspect of the process raises a few important questions that require further discussion. What is the importance of having a “person in the loop” during an FRT-augmented targeting operation? How should a potential false positive result be factored in? What are the implications of a mistaken identity in the course of an FRT-based targeted killing operation?
The following two scenarios are meant to clarify the relevant theoretical and practical questions about targeted killing operations that are based on FRTs.
• In scenario A, an elite unit is deployed on an operation to target person X. An unmanned aerial vehicle (UAV) with a camera scans the area and uses an FRT to authenticate the target. The unit's commander gives the order to engage and neutralize person X.
• In scenario B, an assault UAV is deployed with an operational order to target person Y when she is alone in a secluded location. The UAV scans the area and uses an FRT to authenticate the target. On the basis of that confirmation, the UAV system attacks and kills person Y.
The main difference between these two scenarios is the human factor. In scenario A, the final decision to launch the attack is taken by the unit on the ground, while in scenario B, the UAV operates as a highly automated weapon.Footnote 84 In both scenarios, the decision to attack is based on the positive identification by the FRT. If, in scenario A, the unit acts immediately without any additional confirmation, there is no difference between the scenarios from a legal perspective. However, if the unit has the option to corroborate the result provided by the FRT, this introduces a safety mechanism into the identification process. Another way to include a person in the loop is to make the positive approval itself conditional on a further human confirmation before it is transmitted. Either measure might reduce the chances of false positives, but it will also increase the response time between positive identification and the execution of the attack. When we consider that human-level facial recognition performance stands at 97.53%,Footnote 85 and that non-ideal conditions increase the error rate, the importance of keeping a person in the loop becomes even more significant in reducing both false positives and false negatives.
But what happens when we depart from that ideal scenario and consider that under the fog of war and combat fatigue, combatants are substantially more prone to making mistakes?Footnote 86 FRTs, like other advanced technologies, are immune to such factors. One must wonder whether, under adversarial conditions, reliance on FRTs, especially with a sufficiently high confidence threshold, would not be more beneficial than relying on the person in the loop to make the final call.
Another way to look at the verification process is from the side of the system. As was noted in the technological overview earlier in this article, some FRTs include safety mechanisms to reduce the chances of false positives.Footnote 87 In the context of the two targeted killing scenarios presented above, there are two potential safety mechanisms to prevent false positive as well as false negative results; both are summarized in the sketch that follows this paragraph. The first is verification by the combatants executing the attack in scenario A, after the information from the FRT has been transmitted to them. The question of adding such a safety mechanism becomes complicated, however, when we consider the risk that the combatants might have to take upon themselves in order to verify the identity of the target, the level of the immediate threat posed by the target, and the likelihood of another opportunity to engage the target presenting itself. Naturally, if the threat is not immediate and the next chance to engage the target is on the horizon, even though that person might still constitute a lawful target, the operational decision might have to be to cancel the attack because of the doubt about the identity of the target,Footnote 88 in order to avoid directly attacking a civilian. The second safety mechanism could be fixed at the comparison point, before the information is transmitted for execution. If, at that point, there were another way to double-check the decision, especially when the compared data is not of the highest quality, it would allow for a reduction in the chances of false positives and negatives. Such a safety mechanism could work for both scenarios. On the other hand, if the compared data is of high quality and the confirmation rate is higher than the human recognition rate (i.e., over 97.53%), there seems to be no need for such a safety mechanism, especially if the timing of the attack is crucial. (The timing element could serve as an important aspect of ensuring that “everything feasible” is done to verify that the target is indeed lawful while taking into account the practical circumstances at the time of the attack.Footnote 89)
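The two safety mechanisms discussed above can be summarized as gates in a decision pipeline. The following sketch is purely illustrative of that logic under our own assumptions: the thresholds, the quality flag and the immediacy condition are ours, and the 97.53% constant is simply the human-level benchmark figure cited earlier, not a parameter of any fielded system.

```python
from dataclasses import dataclass

HUMAN_BENCHMARK = 0.9753   # human-level recognition performance cited above
MACHINE_THRESHOLD = 0.99   # assumed machine confidence threshold

@dataclass
class FrtResult:
    confidence: float      # system's confidence in the match
    input_quality: float   # 0..1 proxy for image quality / field conditions

def comparison_point_gate(result: FrtResult) -> bool:
    """Second safety mechanism: double-check before the result is
    transmitted; low-quality inputs are flagged rather than approved."""
    if result.input_quality < 0.5:
        return False  # suboptimal data: route to human review instead
    return result.confidence >= MACHINE_THRESHOLD

def execution_point_gate(frt_approved: bool, human_confirms: bool,
                         threat_immediate: bool) -> bool:
    """First safety mechanism: human verification at the execution point,
    waived only where timing is crucial and the machine's confirmation rate
    exceeds the human recognition rate - the trade-off described above."""
    if not frt_approved:
        return False
    if threat_immediate and MACHINE_THRESHOLD > HUMAN_BENCHMARK:
        return True
    return human_confirms  # otherwise keep a person in the loop

result = FrtResult(confidence=0.995, input_quality=0.8)
print(execution_point_gate(comparison_point_gate(result),
                           human_confirms=True, threat_immediate=False))
```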
Conclusions
Facial recognition is a rapidly developing technology, and it is expected to continue evolving in the coming years. At the time of writing, the technological ability to identify or authenticate a person's face through biometric means can be very accurate in ideal conditions but less than reliable in field conditions. The use of FRTs by armed forces for targeting purposes does not prima facie violate any rule of IHL; as with other advanced technologies, the crucial question is how FRTs are employed.
FRTs have the potential to increase compliance with the principle of distinction. This holds true in regard to targeted killings of both combatants and status-based civilians who are directly participating in hostilities. When used properly, the battlefield deployment of FRTs can reduce the risks to both adversary civilians and the attacking combatants.
From the perspective of compliance with IHL, the use of FRTs brings up several challenges. It is the position of the authors that the main legal risk that FRTs generate actually precedes deployment and lies with the collection and processing of personal data. As such, it is not regulated by IHL, with the exception of Article 36 of Additional Protocol I.Footnote 90 From an operational, user's perspective, the core objection to FRTs relates to their (un)reliability in an uncontrolled environment. Such reliability depends on two crucial aspects: the system's ability to provide a positive identification, and its ability to prevent or flag false positive/negative results in suboptimal field conditions. The problems of false positives and false negatives differ in their causes but are similar in their outcome: misidentification of the target. Without proper safety mechanisms, false positives can lead to the targeting of the wrong person – either a lawful target (e.g., a different combatant or a civilian directly participating in hostilities) or an uninvolved civilian; the latter case might also constitute an arbitrary deprivation of life. False negatives, on the other hand, can lead to missing a chance to neutralize a desirable target, which is usually a person of significant interest. The main (and at this point of technological development, probably also the only) safety mechanism that can mitigate reliability concerns is to include a human in the loop. The human can be placed at the comparison point (i.e., before the approval is transmitted), at the execution point (after the approval has been transmitted), or both. This will not completely prevent mistakes, but it will ensure that human-level recognition performance (97.53%) is factored into the decision-making process. In any event, the use of FRTs will help to reduce the risk to one's own combatants while also increasing protection for uninvolved civilians who might otherwise have been harmed in the course of a larger operation mounted to confirm the identity of the targeted person (even when the collateral damage would not be considered excessive).
An argument can be made that this assessment will change once FRTs develop to the point of providing a confirmation rate higher than the human one, even in suboptimal field conditions. Should that happen, there will be no logical reason to prefer human confirmation over technological confirmation. An assault UAV could then conduct a targeted killing operation against an identified target, without excessive collateral damage, free of any human interference, allowing for a much quicker and more accurate operation without the need to risk the lives of one's own forces. Moreover, where other persons are alongside the designated target, FRTs could be used to identify them and to clarify whether they too are lawful targets (if, for example, they are directly participating in hostilities); in such a case, since no collateral damage to human life is expected and no proportionality concerns arise, the target could be engaged. This, of course, will not be applicable when some of the persons in the vicinity of the designated target are uninvolved civilians, or when the FRT is unable to confirm their identity and there is therefore doubt about their classification.
The possibilities of incorporating biometric technologies on the battlefield are wide and diverse. In this paper, we have focused on the use of FRTs to recognize persons on the battlefield for the purpose of targeting. However, using FRTs, the future might also include more sophisticated options, such as identifying DPH behaviour.Footnote 91 As long as the technology is reliable enough, there is no reason to fear it. In the long run, and if used properly, it can lead to a significant decrease in casualties on the battlefield. However, we must finish with a word of caution – technology is a user-sensitive tool, and if used recklessly, it will create more damage than value.