Introduction
“Innovate or die.” In a recent article, Eric Schmidt, the former Google chief executive officer (CEO) and senior adviser to the US government, explained how this tech company mantra has become the “defining force” of international politics.Footnote 1 According to him, the ability to “invent, adopt and adapt” new technologies is what defines power in an increasingly competitive international environment.Footnote 2 It is indeed difficult to ignore how new technologies such as artificial intelligence (AI) have become a central element of the global competition between States and between private sector actors, and how they are transforming societies at the political, socio-economic and cultural levels.Footnote 3
This “innovate or die” paradigm is a logical consequence of the so-called “Fourth Industrial Revolution”Footnote 4 that has placed technological innovation at the centre of a “new chapter in human development”.Footnote 5 The constant emergence and integration of rapid innovation in computer and data sciences, bio- and neurotechnologies, robotics and other domains is profoundly influencing political agendas and the strategies that States and private sector actors use to develop, compete and survive.Footnote 6 Technological innovation is also changing how people relate to one another, how they work and organize their lives, and how they exercise their fundamental rights. The “digital transformation”Footnote 7 of everything has become a defining feature of humanity in the twenty-first century.
This global trend is also visible in the humanitarian sector. Over the past years, many international humanitarian organizations have digitally transformed and have given technological innovation a prominent place in their development, investment and partnership strategies.Footnote 8 Data and digital technologies have proliferated in these organizations’ operational toolbox and have become standing issues in humanitarian legal and policy debates. In a sector faced with unabated humanitarian needs, chronic funding gaps and the emergence of new actors,Footnote 9 competition for funds continues and innovation is gaining weight as a comparative advantage – turning Mr Schmidt's warning into a reality.
But innovation can also kill and harm. The continuously increasing data flows informing humanitarian responses can be repurposed to surveil people, including at-risk and marginalized groups, exposing them to further risks of targeting or persecution.Footnote 10 Digital exclusion and algorithmic biases can exclude hard-to-reach and off-grid communities from the protection and assistance they need to survive conflicts.Footnote 11 Millions of euros that could have been used for actual life-saving responses have instead been invested in humanitarian innovation programmes in what have become zero-sum-game humanitarian budgets,Footnote 12 where organizations may have to choose between delivering operational solutions and increasing their ability to do so through investing in innovative technologies. In the humanitarian sector, an innovation obsession driven by competition and the fear of missing out, inspired by the private sector's techno-logic, is unlikely to get an organization ahead of the competition; worse, when not thought through carefully and critically, it can lead to catastrophic consequences for the lives and dignity of people affected by conflict and other humanitarian crises.Footnote 13
The digital transformation triggers difficult questions for humanitarian actors in a global context of multiplying conflicts and digitally fuelled political polarization and societal upheavals. On the one hand, digitalization has made aid faster and more efficient.Footnote 14 Among other innovative technologies, telemedicine provides the possibility of bringing advanced medical expertise to hard-to-reach areas where it is needed to assist the victims of severe war-related injuries.Footnote 15 Digital cash transfers are lowering the cost of economic assistance programmes while supporting the autonomy and agency of those who need such assistance.Footnote 16 Data analytics and AI solutions are helping collect and analyze more information to support humanitarian decision-making.Footnote 17 In short, digital technologies and data flows have the power to transcend geographical borders and obstacles to help reach people at speed and at scale, while reducing the cost attached to bureaucratic and logistical constraints.
On the other hand, digitalization is making humanitarian action more risky for the security of the people that it is meant to protect by exposing their personal data to potentially malicious actors who can use it to target or persecute them. It is also making aid more opaque and less human, fuelling accusations of “surveillance humanitarianism”Footnote 18 and “techno-colonialism”Footnote 19 and generating daunting practical and ethical challenges for practitioners.Footnote 20 It is a double-edged sword that can turn against those who pursue it without understanding it.
These conundrums are not new. Academics have for years been highlighting the need for a critical research agenda,Footnote 21 and for new policy tools to inform and maintain an ethical approach to the use of digital technologies in humanitarian action. In response, a multitude of guidelinesFootnote 22 have emerged to help humanitarians manage the tensions that digital opportunities and risks can create. But policy-makers and practitioners continue to lag behind innovation, trying to adapt to increasingly fast technological developments and identifying risks reactively as they materialize. The disconnect between theory and practice is gradually increasing and turning into a potentially dangerous game of digital whack-a-mole. Indeed, most organizations do not have the means to balance the potential rewards that technology brings with the need to mitigate the digital risks involved, and to put into practice, effectively and at scale, their ethical commitments to “do no digital harm”.Footnote 23 The innovation race and the continuously growing and pervasive digitalization of humanitarian activities are widening the gap between theory and practice, and increasing related risks for affected populations and principled humanitarian action.Footnote 24
Today, humanitarianism is at a critical juncture, and in need of a compass to help navigate the many quandaries of the digital transformation. AI hype is accelerating these problems, and the need to address them. Mr Schmidt's simplistic innovation-driven approach is not the right way to address these digital humanitarian dilemmas.Footnote 25 Humanitarian settings and front lines are more complex – and dangerous – to manoeuvre than the competitive environment of Silicon Valley. Instead, humanitarians should look to the fundamental principles that underpin humanitarian action – humanity, impartiality, neutrality and independence, known collectively as the humanitarian principles – to find their own ways to manage those risks and challenges, and to mitigate their negative consequences.Footnote 26
The humanitarian principles have been critical tools for confronting humanitarian challenges across time and space. They have demonstrated their added value as an analytical prism, ethical compass and operational tool for thinking critically and pragmatically about ways around the obstacles that political and conflict realities create. They can and should continue to do so in the digital age,Footnote 27 even if the framework they provide does not guarantee easy or perfect solutions. Managing dilemmas is about choosing the least bad option between two morally imperfect solutions, but without systematic evidence-based approaches and adequate ethical frameworks, decisions are left to the mercy of circumstances or unproductive or contentious debates based on the personal preferences and biases of those involved.Footnote 28 Structuring strategic and operational decision-making processes on innovation and the use of new technologies through a principled framework can help avoid these meeting-room traps.
A principled framework for decision-making starts with rejecting the binary framing that often characterizes debates on innovation and new technologies – i.e., enthusiasm, depicting them as the panacea for the challenges of the future, versus pessimism, portraying them as existential threats. It also requires deconstructing private sector assumptions attached to innovation and digital partnerships, and using the humanitarian principles to help design rights-based solutions that respect and advance the safety and dignity of populations affected by conflict and humanitarian crises, while preserving the essential elements of humanitarian action.
This article seeks to demonstrate how humanitarians can and should use the humanitarian principles of humanity, impartiality, neutrality and independence to approach innovation and new technologies. To do this, it first outlines the problems that the assumptions attached to the digital transformation pose for humanitarian action. The subsequent four sections address in turn how digitalization impacts the ability of humanitarians to operate in line with the requirements of the humanitarian principles. The conclusion proposes orientations to better integrate the principles into humanitarian strategies, policies and practices, in order to ensure a more responsible approach to digital innovation.
Digitalization and the “shifting problem definition”Footnote 29
The currently prevailing approach to digital transformation in the humanitarian sector seems overly driven by considerations of convenience and organizational interests that are not necessarily aligned with the needs of populations affected by humanitarian crises. Asking critical questions about innovation practices in the humanitarian sector should not be mistaken for an opinionated Luddite rejection of new technologies. Innovation has, without a doubt, had a positive impact on humanitarian action and the ability to alleviate the suffering and enhance the agency of victims of conflict and other humanitarian crises. Across the spectrum of prevention, mitigation, preparedness, response and recovery efforts,Footnote 30 and from digitalized humanitarian logisticsFootnote 31 to AI-informed medical diagnostics,Footnote 32 new technologies and innovation have always been key to humanitarian action and its progress.Footnote 33
As others have already highlighted, the conversation is first and foremost an ethical one, and it demands attention. It is about trying to take a step back, “moving from a discussion of what technology does for humanitarian action to asking what technology does to humanitarian action.”Footnote 34 It is about ensuring that humanitarians keep a cool head vis-à-vis technological determinism and that they are in a position to use innovation responsibly, in line with the interests of the populations they serve, and with their mandate and values. This necessary but difficult task requires a slower time frame and sequencing that is at odds with the innovation dynamic and the speed of technological progress – which explains both why it is neglected and why it is urgent.
Technology is not neutral
First, humanitarians need to explicitly acknowledge and integrate the fact that innovation and technology are neither neutral nor necessarily good in and of themselves.Footnote 35 They constitute complex socio-technical constructs carrying underlying but significant assumptions and values, usually aligned with those of the people who develop and promote them.Footnote 36 The “digital transformation” is, for instance, not a mere factual description of a trend, but also an agenda for change that comes with implicit but important structural shifts reflecting neoliberal and capitalist orientations.Footnote 37 It is shaping and shaped by society and by political and economic interests, and it triggers serious consequences and impacts.Footnote 38 Neglecting these considerations comes with significant risks for humanitarian action.Footnote 39
Academics have already highlighted how these structural shifts are impacting the humanitarian sector. Its centre of gravity is moving from a focus on mostly physical and human methods to the technological and digital, from largely public and non-profit to hybrid and commercial approaches, and from a central role of States and governments to the growing role of private sector actors.Footnote 40 Humanitarian organizations are recruiting more data scientists and AI experts, but fewer anthropologists and ethnologists. They are developing public–private partnerships, but are less at ease with civil society organizations. Their donors are requesting more “value for money”,Footnote 41 but not necessarily human rights impact assessments.Footnote 42
The consequences of those shifts are contributing to what has been described as a “privatization”,Footnote 43 “commodification”Footnote 44 and “marketization”Footnote 45 of the sector. This evolution is reflected in the changing vocabulary of humanitarian professionals.Footnote 46 It has become relatively common to hear discussions on improving “productivity” (instead of impact) through innovation and ensuring “scalability” (instead of relevance). People affected by conflict and other humanitarian crises have over time turned into “customers” and “clients” for humanitarian “services”.Footnote 47 Donors request “returns on investment” and private sector partners offer expertise in leveraging market opportunities for “social good”.Footnote 48 This lexicon is worth analyzing because it reveals the assumptions and dynamics that come with it – namely, a focus on perceived efficiency and measurable outputs at the cost of qualitative humanitarian outcomes, and on market-based commercial strategies at the cost of needs-based humanitarian approaches.
Humanitarians have a lot to learn from private sector actors in terms of efficiency and the ability to deliver on commitments. Partnering with private companies can help improve effectiveness and management practices, and humanitarian organizations have been notoriously and legitimately criticized for their failures in these domains.Footnote 49 But these partnerships are not always a two-way street, and humanitarian actors are often more affected by the transfer of knowledge and values from the private sector than the other way around. This is particularly true in the field of innovation and new technologies.Footnote 50
It is argued that “while ‘technology’ and ‘the private sector’ have both been constant entities in the humanitarian sector”, their significant influence in humanitarian innovation “represents something qualitatively new”,Footnote 51 “changing the very nature of humanitarianism”.Footnote 52 The assumptions integrated in tech companies’ business strategies and practices can have a transformative impact on humanitarian ethics and practice.Footnote 53 Private tech companies’ utilitarian approach to humanitarian partnerships is understandable because the partnerships represent good branding, visibility, corporate social responsibility and new market entry possibilities, among other incentives.Footnote 54 This approach, however, comes with supply-driven opportunistic and experimental methodologiesFootnote 55 that do not necessarily align with the needs-driven and precautionary ones that ought to characterize humanitarian practices.Footnote 56
“Techno-solutionism” and utilitarian approaches
Tech companies, by their purpose and nature, function on the business- and profit-driven assumption that technological innovations are intrinsically good and can fix all sorts of human and societal problems. This so-called “techno-solutionism”Footnote 57 is difficult to resist for humanitarian organizations faced with unbearable levels of suffering, intractable needs and limited resources in particularly fluid and insecure environments. There is therefore high humanitarian receptivity for “tech-utopianism”Footnote 58 and opportunistic solutions to alleviate suffering.Footnote 59 In simple terms, the prevailing attitude is: if tech can help, let's use it.
This pragmatism is explicit in the humanitarian narrative vis-à-vis emerging technologies, which focuses on leveraging the opportunities they create while mitigating the risks they bring, in order to help “more” people.Footnote 60 This approach seems a priori adapted to the double-edged nature of digital technologies. Utilitarian ethics are neither unfamiliar nor illegitimate in humanitarian action: prioritizing solutions that help improve the situation of as many people as possible makes sense. The problem is that utilitarian ethics alone are insufficient in the humanitarian context.Footnote 61 Deontological, value-based and professional ethics are important complementary guardrails against overly pragmatic choices – and this is where the humanitarian principles, taken as an interdependent whole and in hierarchical order, serve as useful checks and balances.Footnote 62
Indeed, a more granular analysis of the “opportunities versus risks” utilitarian frame based on the techno-solutionist promises and assumptions of tech companies often reveals that the binary equation is not a fair one. In practice, the supply-driven opportunities that these technologies offer can be “solutions in need of a problem” for which humanitarian action constitutes an interesting testing ground.Footnote 63 This framing can reverse the humanitarian problem identification process from a problem-driven approach to one driven by solutions.Footnote 64 Instead of asking if and how new technologies can help alleviate suffering, the leading question becomes if and how humanitarian needs can help “keep up” with new technologies. Instead of being a means to achieve humanitarian ends, the use of new technologies becomes the end, and humanitarian needs the means to achieve it. Instead of being “bottom-up” and triggered by operational needs and challenges, innovation becomes “top-down” and justified by considerations of competitive relevance and the availability of technology and funding.Footnote 65
As a result, technological choices are often used to address the problems of humanitarian organizations, their donors and their partners. For example, digital advances have improved traceability for fraud prevention and security by improving the identification and authentication of people in need – but this process can sometimes create or reinforce a de facto presumption that affected populations are potential fraudsters or security threats, highlighting a reversed burden of proof and a lack of trust in them.Footnote 66 New technologies have also improved scalability, creating economies of scale in operational delivery methodsFootnote 67 and enabling bureaucratic cost reduction for humanitarian organizations – but this process can lead humanitarians to neglect the differences and specificities of individual contexts. These dynamics can reverse humanitarian logic: while the opportunities are mostly benefiting the providers of humanitarian aid and their partners, the risks are mostly carried by the populations at the receiving end, in particular when the personal data they provide to feed innovative digital solutions are not adequately protected.
Any humanitarian intervention, innovative or not, is likely to cause some degree of harm.Footnote 68 Utilitarian ethics illustrate this reality in requiring a positive balance between that level of consequential harm and the greater good achieved. What is fundamental is that this calculus is made explicit, adequately assessed, and accounted for to the extent possible. In the context of humanitarian innovation, those requirements often seem to have become neglected afterthoughts.Footnote 69 Despite efforts to improve accountability to affected populations and increase their participation in the design and delivery of humanitarian responses,Footnote 70 affected people still do not really contribute to the decision-making or risk analysis attached to the specific deployment or use of innovative technological solutions by humanitarian organizations.Footnote 71 When they are consulted, it is often to support a confirmation bias, and without allowing or helping them to truly understand what is at stake.
The attached risks are therefore imposed on them, sometimes without their knowledge or truly informed consent – including when such consent is required because sensitive personal information is being processed.Footnote 72 It is often argued that, when asked, affected people would certainly want to have access to innovative digital tools and that it would be paternalistic to deny this to them. Like everyone else, affected people have a fascination bias towards technology, and a desire not to be left behind in the wake of the digital transformation. But this does not mean that they understand the risks attached to digital technologies in their contexts, or that many of the safeguard mechanisms available to others are necessarily available or functioning where they find themselves. To better respect their safety, dignity and autonomy, humanitarians have a duty to go beyond this assumption and to help them be in a better position to make a truly informed decision. This requires explaining to them, in a language they understand, why a specific technology is used for a particular programme, and translating into real-life examples what the risks and consequences of using that technology may be.
In many situations, however, affected people are not even asked for their opinion, and this is also true for the use of innovative technology for humanitarian action. It is often assumed that in their dire situation and exceptional circumstances, the risks attached to the digitalization of humanitarian responses are justified by the potential gains.Footnote 73 In the context of humanitarian emergencies, suffering and urgency are sometimes used as excuses to justify experimentation, exceptionality and higher risk appetite at the cost of safety and ethical guaranteesFootnote 74 that can be seen as obstacles to action and immediacy. This trade-off management approach is ethically questionable because it contradicts the “do no harm” requirement attached to the humanitarian principle of humanityFootnote 75 and does not respect the agency that is so central to the dignity of affected people. When technological experimentation can cause or lead to real additional human harm, it creates a risk of defeating the very purpose of humanitarian action: the alleviation of suffering. And when new technologies are deployed without the participation or consent of affected people, they neglect those people's agency and ability to participate in decisions that affect their lives. These potential drawbacks are significant pressure points on the ability of humanitarian actors to operate in line with the principle of humanity.
Eroding humanity through datafication and automation?
The principle of humanity embodies the raison d’être of humanitarianism, and if there “were to [be] only one principle, it would be this one”.Footnote 76 It is a principle superior to the other humanitarian principles because it captures the motivational and founding values of humanitarian action. Humanity is indeed humanitarian action's engine, compelling humanitarians to do as much as they can to save lives, reduce suffering, and improve the well-being and respect for the rights and dignity of people affected by humanitarian crises. Digital technologies can help them do this in many ways, and humanitarians have a duty to explore, within the limits of their mandate, if and how these tools can help them advance this fundamental objective, while doing no harm – or rather, while minimizing as much as possible the unintended harms they may create.Footnote 77
In his Commentary on the Fundamental Principles of the Red Cross, Jean Pictet explained how the principle of humanity requires humanitarians to “not threaten … the lives, integrity and the means of existence” of populations in need, and to “have regard for their individual personality and dignity”.Footnote 78 Writing in 1979, Pictet anticipated the need to interpret these considerations in light of historical evolution, indicating that “it would be useless and hazardous to enumerate all [that the principle] constitutes, since it varies according to circumstances”.Footnote 79 It seems clear today that the digital transformation represents a new “circumstance” significant enough to be factored into the modern interpretation of the principle of humanity.
Generative AI and the risk of degenerative humanitarianism
One of the key tenets of the humanity principle resides in the sentiment or attitude of someone who shows themself to be human.Footnote 80 Yet, one of the objectives of the digital transformation – particularly with AI – is to use technology to perform tasks normally carried out by humans. In a sense, it aims to “de-humanize” certain activities. As a result of digitalization, some humanitarian activities and processes are likely to become literally less human because the professional “aid deliverer”, or the interface that represents it, becomes a machine rather than a human being. In some domains – such as information or financial management – this transformation is not necessarily problematic, and can reduce the burden attached to repetitive and unpleasant but necessary bureaucratic tasks.Footnote 81 In others, where empathy is important, it raises significant questions related to the increasing disappearance of humans, and their ability to demonstrate empathy and understanding, in the delivery and management of humanitarian activities.
Respecting the dignity of people in need implies the ability to show empathy and to understand their situation or feelings. This requires an ability to listen and to discern the complexities and nuances of their experiences, as dignity is a personal feeling that is necessarily self-defined.Footnote 82 This explains why humanitarians have always attached importance to being physically present where affected populations find themselves. When humans are replaced by digital interfaces that introduce different forms of intermediation and remoteness – for instance, when a trained humanitarian worker able to show empathy is replaced by a smartphone app for “self-registering” needs – this essential element of proximity is inevitably diminished.Footnote 83 It is therefore important that humanitarians strive to use digital tools to enhance, and not replace, human interactions – but it is essential to understand that their efforts to do so will be jeopardized by the pervasive nature of digital technologies, which tend to spread and expand organically.Footnote 84
An illustrative example of the digitalization of human interactions is the development of “humanitarian chatbots”. It is argued that “in recent years, chatbots have offered humanitarian operators the possibility to automate personalised engagement and support, inform tailored programme design and gather and share information at a large scale”.Footnote 85 According to the Office of the UN High Commissioner for Refugees (UNHCR), chatbots
represent an opportunity to engage at scale, ensure that data is adequately captured, securely stored and shared with front-line staff, who are currently wading through ad-hoc unstructured requests for support. … The advent of artificial intelligence presents an opportunity. The capacity for technology to navigate human speech and text has evolved to such an extent that it is becoming ever more possible and plausible to create dialogue and understanding to the level where … users cannot discern between a human and a machine.Footnote 86
Without entering into a detailed analysis of the pros and cons of these tools – which others have aptly analyzedFootnote 87 – the above statements confirm that these innovations are mostly geared towards organizational interests (i.e., data collection and scalability) and rely on the assumption, or confusion, that humans and machines are interchangeable.
But research is showing that they are not. Humans have a natural tendency to anthropomorphize technological innovations and give them human attributes that they do not possess. The so-called “Eliza effect”Footnote 88 is a cognitive bias associated with textual interface computer programs, leading users to believe that the machine has human capabilities such as intelligence or empathy. Impressive progress in machine learning and large language models (LLMs) in recent years has considerably improved chatbots’ performance and has magnified this cognitive bias (on which AI marketing lexicon and visuals heavily rely),Footnote 89 leading to debates on whether these machines can be “sentient”Footnote 90 or perform “any” human task (i.e., “artificial general intelligence”).Footnote 91
A recent scientific study demonstrated that in the health-care domain, patients tend to rate chatbots’ responses to their medical questions as higher in quality and empathy than those of physicians.Footnote 92 This finding highlights the power of the AI systems behind chatbots and their ability to generate an illusory feeling of empathy in their users. Yet, clinical experts have documented the negative secondary impact of artificial empathy in terms of trust and effective care, pointing to the impossibility of truly replacing human empathy with an AI version of it.Footnote 93 According to them, the illusion is eventually counterproductive, and it is critical to maintain “human monitoring and emotional intervention” in order to ensure effective empathy. Ignoring that requirement can trigger “difficult moral and legal responsibility” issues for medical professionals.Footnote 94
This lesson is highly relevant for humanitarians, for whom the concepts of care, trust and effectiveness are indispensable. The negative long-term secondary effects that chatbots may have on the effectiveness of humanitarian care, and on the trust of the populations who now partly depend on these technologies, jeopardize the human elements without which the humanity principle cannot be operationalized. Care and trust are not productivity metrics, but essential requirements of the humanitarian endeavour. Without appropriate safeguards, the growing use of chatbots and other “generative AI” systems risks becoming the seed of a degenerative humanitarianism.
Protecting data is protecting people
Digitalization brings other challenges vis-à-vis the principle of humanity. Some have argued that the process of “datafication” – which turns information about people into data points feeding “quantitative processes that used to be experienced qualitatively”Footnote 95 – is itself a dehumanization process necessarily reducing people's complex realities and identities to intangible data and predefined categories.Footnote 96 It seems indeed questionable that the binary nature of data,Footnote 97 or the categorization labelling requirements of databases and LLMs, can adequately capture the complex and multifaceted identities, experiences and needs of people affected by humanitarian crises.Footnote 98 Instead of making data fit for humanitarian needs, the focus is on fitting needs into data and getting as much data as possible about them.Footnote 99 Unstructured, localized, nuanced and qualitative information are not in sync with the data hygiene requirements needed to facilitate data flows.Footnote 100 “Datafication” can therefore lead to “ignor[ing] or even smother[ing] the unquantifiable, immeasurable, ineffable parts of human experience”Footnote 101 and can reduce affected people to their “electronic double”, or mere digital avatars.Footnote 102 When these important nuances are lost to quantitative methodologies supporting the auditing and securitization objectives of digital innovation,Footnote 103 there is a significant risk that those methodologies will end up replacing qualitative ones, eventually eroding the quality of humanitarian responses.
Other concerning threats hide behind the growing number of humanitarian data flows. Humanitarian data streams contribute to understanding and documenting needs or international humanitarian and human rights law violations, but they can also “be a causal vector for harm” through “unpredictable or unpredicted knock-on effects”.Footnote 104 For example, if such data are not adequately protected, personally and demographically identifiable information – combined with other, less sensitive data through a “mosaic effect”Footnote 105 – can help identify and locate people for surveillance or targeting.Footnote 106 Such data can also help to draw up their political or emotional profiles for influence or manipulation purposes.Footnote 107 In many countries and for most people, the mis- or repurposed use of these data usually leads to targeted online advertisements and unsolicited commercial offers – what has been described as “surveillance capitalism”.Footnote 108 In conflict and other violent settings, it may expose vulnerable people or communities to a different set of risks and can lead to targeted killings, arrests, persecutions or disinformation and manipulation. As recalled by former US Central Intelligence Agency director General Michael Hayden, security agencies use seemingly innocuous metadata to kill people.Footnote 109 While it is virtually impossible to know if and when humanitarian data are used to target or persecute people, the absence of evidence should not be taken as evidence of absence, and the risk needs to be taken seriously.
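The “mosaic effect” can be made concrete with a short sketch. All records, field names and names below are invented for illustration; the point is simply that two datasets that are each innocuous in isolation can re-identify individuals once joined on shared quasi-identifiers (such as birth year, district and sex):

```python
# Minimal sketch of the "mosaic effect": a "de-identified" dataset and a
# seemingly innocuous public list, joined on quasi-identifiers, can link
# sensitive records back to named individuals. All data below is invented.

# Health records with names removed, but quasi-identifiers retained.
health_records = [
    {"birth_year": 1984, "district": "North", "sex": "F", "diagnosis": "malaria"},
    {"birth_year": 1990, "district": "East",  "sex": "M", "diagnosis": "cholera"},
]

# A less sensitive dataset, e.g. a hypothetical aid-distribution list.
public_registry = [
    {"name": "A. Diallo", "birth_year": 1984, "district": "North", "sex": "F"},
    {"name": "B. Traore", "birth_year": 1990, "district": "East",  "sex": "M"},
    {"name": "C. Kone",   "birth_year": 1990, "district": "West",  "sex": "M"},
]

QUASI_IDENTIFIERS = ("birth_year", "district", "sex")

def reidentify(records, registry):
    """Join the datasets on quasi-identifiers; a unique match links a
    sensitive record back to a named individual."""
    linked = []
    for record in records:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        matches = [p for p in registry
                   if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(matches) == 1:  # unique combination -> re-identification
            linked.append((matches[0]["name"], record["diagnosis"]))
    return linked

print(reidentify(health_records, public_registry))
# Both "anonymous" records are now attributable to named individuals.
```

In a conflict setting, the “registry” could equally be a leaked beneficiary list or commercially available data, which is why removing names alone is widely regarded as insufficient protection.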
The humanitarian sector has made enormous progress in recent years in acknowledging the importance of personal and humanitarian data protection,Footnote 110 and in minimizing the potential harms triggered by the sector's digital activities, in line with the so-called “do no digital harm” concept.Footnote 111 Most organizations have adopted data security policies and guidelines, hired experts and set up data protection offices. Yet, these tools are inspired by data protection rules and precepts – in particular the European Union (EU) General Data Protection Regulation (GDPR)Footnote 112 – created by and for countries with significant legal and technical means, and even in those countries, their implementation remains an enormous challenge.Footnote 113 In places affected by conflict and disasters, where institutional safeguards and control mechanisms (such as data protection legislation or controlling authorities) are often dysfunctional or non-existent,Footnote 114 implementing data protection standards becomes virtually impossible.
Ensuring effective data protection requires informing “data subjects” about why and how their data is being collected, used or shared. Humanitarian organizations face significant transparency and accountability challenges with regard to their increasing reliance on data collection, and in helping affected populations understand the related trade-offs.Footnote 115 It requires time and effort to translate complex technical issues into a language that people understand; it also implies breaking the power imbalances that characterize aid provision and offering true alternatives to the data-for-aid bargain now so often implicitly embedded in digital humanitarian processes.Footnote 116 Yet, in the context of emergency humanitarian responses, these important requirements can sometimes be considered as burdensome obstacles to speed and scale, or disregarded based on certain assumptions.Footnote 117 As a result, informed consent becomes an afterthought, increasingly neglectedFootnote 118 in favour of other data collection bases such as the “legitimate interests” of the organizations who collect the data – sometimes behind the shield of legal privileges and immunities, which are important safeguards that should not be misused.Footnote 119
As most people have experienced, clicking on the “I agree” button to accept the lengthy and complex “terms and conditions” applying to digital services without understanding them is not exactly a satisfying experience of informed consent, control or agency.Footnote 120 There are valid concerns that the “informed consent” concept is no longer fit for purpose in a digitalized world controlled by asymmetrically powerful tech companies.Footnote 121 However, by neglecting informed consent in their practices, humanitarians are imposing risks on affected populations without having adequate means to be responsible or accountable for their potential consequences.Footnote 122 This effectively turns “data subjects” into “data objects”Footnote 123 and risks violating their agency, autonomy and dignity.Footnote 124 This is a fundamental design problem that is amplified as digital solutions proliferate.
Effective digital solutions also demand strong data and cyber security.Footnote 125 Despite investments in recent years, humanitarian organizations’ growing “cyber perimeter” increasingly exposes them to data leaks and cyber security incidents.Footnote 126 The January 2022 data breach that affected the International Committee of the Red Cross (ICRC) illustrated the “infinite vulnerability”Footnote 127 of humanitarian digital ecosystems to these growing threatsFootnote 128 – and the chimeric nature of cyber security.Footnote 129 Other incidentsFootnote 130 have highlighted how human errors and lack of digital security awareness remain the weakest cyber security link.Footnote 131 The emerging question is whether humanitarian organizations should continue increasing their reliance on digital tools and processes when they do not have the means to effectively control them or be accountable for them,Footnote 132 at serious cost to the safety and dignity of the populations that these organizations are meant to protect. As recently put by a cyber security expert,
[i]f you're an NGO working in conflict zones with high-risk individuals and you're not managing their data right, you're putting the very people that you are trying to protect at risk of death. … If you're trying to protect people but you're doing more harm than good, then you shouldn't be doing the work in the first place.Footnote 133
In his commentary on the humanity principle, Jean Pictet highlights that “restorative action … must be accompanied by preventive action”.Footnote 134 It has become urgent to understand that the digital transformation brings new dimensions of risk, demanding greater preventive efforts to avoid creating more suffering and protection needs. In contemporary humanitarian settings, “doing no harm” means addressing the data and cyber security dimensions of human safety, security and dignity.Footnote 135 As the gap between policy and practice continues to grow,Footnote 136 humanitarians should pause and reflect on how to reduce it.
Impartiality in a world of digital exclusion and algorithmic biases
The principle of impartiality is a functional enabler for the principle of humanity. Inspired by considerations of equality and equity, it also transposes medical ethics considerations onto humanitarian action and is a critical tool for the triage and prioritization of needs. Impartiality requires humanitarians to prioritize “the most urgent cases” objectively, and to provide aid without any “discrimination as to nationality, race, religious beliefs, class or political opinions”.Footnote 137 Despite being “self-evident”, impartiality is “nevertheless difficult to apply fully in real life, where it encounters numerous obstacles”.Footnote 138
Putting impartiality into practice is hard for many reasons. The areas where those most in need are located can be difficult or impossible to reach because of security, logistical or administrative reasons. Active hostilities, impracticable roads or “no-go areas”, and security blockades can prevent or limit access, further fuelling the “bunkerization” of humanitarian action.Footnote 139 Marginalized groups can become invisible, hiding out of fear, or hidden for political reasons by those in power. Sometimes, humanitarians lack the means or time to analyze their situation with sufficient detail or context – and even when they do, human biases can distort their analysis. In short, impartiality is as important as it is difficult to achieve, and practice has shown that it should never be taken for granted.Footnote 140
When “big data” turns into “bad data”
Early on in the digital transformation, technological advancement seemed like a solution to the problems described above. In the 2000s, the concept of “big data” emerged as a powerful way to gather insights from a large variety of data sources and bridge the information gaps that jeopardized public action's relevance and effectiveness.Footnote 141 By leveraging and combining the multitude of data generated by digital technologies, the hope was that “big data analytics” would help “find unexpected connections and correlations”, “make unusually accurate predictions”Footnote 142 and support better-informed decision-making and practices. Computerized and algorithmic management of an “overwhelming amount of information” could help to capture the data falling through the cracks of existing analogue processes.Footnote 143 While the use of data and analytics was not new to the administration of public action, the advancement of computing capacities that came with digital innovation made everything easier and faster.Footnote 144
This information “management revolution”Footnote 145 was attractive for humanitarian operators struggling to make good use of the data collected through their activities, and to leverage external information to better understand the complex dynamics of their operating environments.Footnote 146 The United Nations (UN) and other international organizations rapidly committed to leveraging big data and placed it at the centre of their operational and development strategies.Footnote 147 This triggered a systemic shift in which data moved from the periphery to the centre of humanitarians’ agenda and practices, including vis-à-vis donors and partners, who increasingly demanded access to the data and “evidence base” supporting strategic and programmatic choices.Footnote 148 Like in the private sector, data became “the new oil” and a key “value extraction” tool to support progressFootnote 149 and “boost humanitarian investments”.Footnote 150 While the analogy between oil and data has been criticized for its limits,Footnote 151 it is nevertheless helpful to understand that like oil, data is a finite resource which can similarly contaminate and damage the environment.Footnote 152 This is also true in the humanitarian environment.
The 2014 international response to the Ebola epidemic in West Africa illustrates this problem.Footnote 153 As this example highlights, instead of supporting better-informed decisions, the multiplication of data flows “invites the problems of digital systems into the most fragile and vulnerable environments in the world”,Footnote 154 outpacing the capacity of disaster responders to make sense of themFootnote 155 and adding significant layers of complexity to coordination.Footnote 156 Instead of supporting prevention and preparedness, big data can create an overload of not-so-relevant information that can impair the ability of responders to see, interpret and respond to factual realities.Footnote 157 The inherent limitations of data, which are often at best incomplete and at worst inaccurate, are necessarily amplified through the magnifying effect of big data analytics. This can lead to ineffective or counterproductive interpretations, turning “big data” into “bad data”Footnote 158 and reinforcing existing inequalities or creating new forms of discrimination.Footnote 159
It is now commonly agreed that AI systems have an intrinsic and significant bias problem.Footnote 160 This well-documented issue originates from the datasets feeding algorithmic and machine learning systems, which reflect the systemic discrimination and inequalities embedded in the societal realities they aim to capture.Footnote 161 This is particularly true for raceFootnote 162 and genderFootnote 163 discriminations that are so deeply entrenched in societies and so well reflected in generative AI systems.Footnote 164 The problem with these algorithmic biases is that, unlike their human counterparts, they are projected at scale and are often more difficult to identify, explain and rectify.Footnote 165 The internal functioning of algorithms is most often a “black box” commercial secret, carefully protected by the private companies who own these systems.Footnote 166 Even the engineers and scientists who create the algorithms can have difficulties deconstructing their opacity and explaining how they transform inputs into outputs.Footnote 167
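The mechanism by which biased training data becomes biased automated decision-making can be illustrated with a deliberately simplistic toy “model” (all data invented): a system that merely learns historical outcome rates per group turns past discrimination into a decision rule, applied at scale.

```python
# Toy illustration (invented data): a model that simply learns historical
# approval rates per group reproduces -- and automates -- the skew that is
# already embedded in its training data.
from collections import defaultdict

# Historical decisions, skewed against group "B".
training_data = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),   # group A: mostly approved
    ("B", 0), ("B", 0), ("B", 0), ("B", 1),   # group B: mostly rejected
]

def fit_majority_rule(data):
    """Learn the majority outcome for each group."""
    counts = defaultdict(lambda: [0, 0])      # group -> [rejections, approvals]
    for group, outcome in data:
        counts[group][outcome] += 1
    return {g: int(c[1] > c[0]) for g, c in counts.items()}

model = fit_majority_rule(training_data)
print(model)
# The historical skew is now a rule: every future applicant from
# group B is rejected, regardless of individual merit.
```

Real machine learning systems are vastly more complex, but the underlying dynamic is the same: the model optimizes for fidelity to its training data, including the discrimination that data reflects, and does so at a scale and opacity no human decision-maker could match.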
These transparency and explicability issues are particularly problematic in the humanitarian domain because they limit humanitarians’ ability to explain the causal relationships between needs and responses, and to demonstrate the objectivity of the assessments they rely on.Footnote 168 To demonstrate their impartiality, humanitarians must be in a position to explain the proportionality of their response to existing needs – in particular why some are addressed and others are not. When data and algorithmic biases are unidentifiable, they can hide and spread, contaminating needs assessment models, obstructing the ability to explain how proportionality was evaluated and threatening the impartiality of responses. In practice, they risk defeating humanitarians’ efforts to be more accountable for the use of the precious resources they have, and towards the populations they serve.Footnote 169 The proliferation of AI systems in humanitarian action can therefore bring more, not less, opacity and problems for impartiality, accelerating “the divide between the haves and the have nots”.Footnote 170
Digital divides and exclusions
The second tension that increasing digitalization introduces to humanitarian impartiality consists in the “digital divides” and exclusions resulting from the digital transformation. In 2023, an estimated 2.7 billion people, roughly a third of the global population, remained without access to digital connectivity.Footnote 171 These divides are unequally distributed across the globe, and are more present in Africa, the Middle East and Asia, where a large share of global humanitarian needs and operations are also concentrated.Footnote 172 Digital divides disproportionately affect people living in rural and hard-to-reach areas, who also constitute a significant share of people in need of humanitarian assistance.Footnote 173 These quantitative divides are compounded by qualitative ones. Women and girls,Footnote 174 people with disabilitiesFootnote 175 and people with low levels of educationFootnote 176 tend to be more excluded from digital connectivity. Many others do not have access to “meaningful” and reliable connectivity,Footnote 177 expanding the nature and impact of digital exclusion to a large share of connected people who access the internet through unsafe digital devices or infrastructures.Footnote 178
These compounded divides leave billions of people “who do not routinely engage in activities that big data and advanced analytics are designed to capture” on the digital periphery.Footnote 179 In addition to the algorithmic errors discussed above, these “big data exclusions” result in “another type of error that can infect datasets …: the systemic omission of people who live on big data's margins, whether due to poverty, geography, or lifestyle, and whose lives are less ‘datafied’[,] … distorting datasets and, consequently, skewing the analysis” on which humanitarians increasingly depend to assess needs and prioritize their responses accordingly.Footnote 180 These exclusions create a “new kind of voicelessness” and have profound impacts on already marginalized people and communities in terms of representation and inclusion, potentially jeopardizing their access to impartial humanitarian assistance.Footnote 181
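The statistical effect of these “big data exclusions” can be sketched with a minimal, entirely hypothetical example: when needs are estimated only from the “datafied” (connected) share of a population, the resulting average systematically understates the needs of those on the digital periphery.

```python
# Hypothetical illustration: needs estimated only from people whose lives
# are "datafied" understate the needs of the offline population, skewing
# the proportionality assessment. All numbers are invented.

population = [
    {"connected": True,  "need_score": 2},
    {"connected": True,  "need_score": 3},
    {"connected": True,  "need_score": 2},
    {"connected": False, "need_score": 8},  # offline, hard-to-reach
    {"connected": False, "need_score": 9},  # offline, hard-to-reach
]

def average_need(people):
    scores = [p["need_score"] for p in people]
    return sum(scores) / len(scores)

observed = average_need([p for p in population if p["connected"]])
actual = average_need(population)

print(f"observed (connected only): {observed:.1f}")  # understates needs
print(f"actual (whole population): {actual:.1f}")    # much higher
```

Because the people missing from the dataset are precisely those most likely to be in greatest need, the error is not random noise but a systematic bias against the most marginalized.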
In response, the international community has engaged in a new effort to achieve universal and meaningful connectivity by 2030.Footnote 182 The UN High Commissioner for Human Rights recently called for a new right to access the internet,Footnote 183 and an increasing number of civil society organizations are demanding better legal recognition of the fundamental role that digital connectivity can play in enabling access to health care, education, workFootnote 184 and other services essential to people's survival and well-being in humanitarian situations.Footnote 185 They are, unsurprisingly, supported by tech companies,Footnote 186 which are already providing “free” connectivity in developingFootnote 187 or conflict-affected countries.Footnote 188 Humanitarian organizations are joining the effort, developing “connectivity as aid” delivery capacities, despite the significant operational and ethical dilemmas implied.Footnote 189 Yet, in parallel, others are calling for a “right to not use the internet”, observing that the digital transformation has turned connectivity “into a de facto obligation for anyone” who wants to exercise their fundamental rights.Footnote 190 While many are demanding more of it, others are asking to be able to opt out from connectivityFootnote 191 because it “helps reproduce … inequality and external control rather than ameliorate such conditions”.Footnote 192 Acknowledging these people's perspective and respecting their choice is an increasingly relevant consideration in the difficult exercise of ensuring humanitarian action's impartiality in the digital era. In other words, digital connectivity should be a genuine choice – available for those who want it, and not required for those who do not. Humanitarian aid should be adapted to both, so that their needs, voices and perspectives are equally taken into account in the difficult proportionality assessment exercise that is required by the principle of impartiality.
Neutrality and independence in a fragmented digital world
The debates and nuances discussed so far also serve to highlight how digital connectivity and the tools that come with it have become contested political issues that humanitarians need to understand and treat as such in order to preserve their commitment to the humanitarian principles of neutrality and independence. Neutrality and independence are political concepts often discussed in the context of international relations and how States and other political entities relate to one another. In the humanitarian domain, they have a similar meaning – i.e., avoiding political or ideological affiliations – but also specific operational and practical dimensions.
Neutrality is a strategic and tactical tool enabling humanitarians “to enjoy the confidence of all”Footnote 193 – parties to armed conflict, affected populations and donors, and other humanitarian crisis stakeholders. It requires humanitarian actors to “not take sides in hostilities or engage at any time in controversies of a political, racial, religious or ideological nature”Footnote 194 but is often misunderstood or criticized as an excuse for silence or inaction.Footnote 195 In fact, neutrality is what allows humanitarians (at least those who have chosen to abide by this principle, which is not a requirement for all forms of humanitarian actionFootnote 196) to operate in polarized and insecure environments, and across front lines. It helps preserve the credibility and weight of their voice when they speak out on humanitarian issues and take the side of the victims of violations of international humanitarian and human rights law.Footnote 197 Neutrality is a means to achieving humanitarian ends, requiring difficult choices and putting aside personal preferences and opinions in order to be able to help those in need. It is not a choice of convenience, but a contextual requirement for action.
The principle of independence is directly related to neutrality. It requires humanitarian organizations to remain detached from political, military, economic or religious powers and from the strategies that are associated with them.Footnote 198 It is a practical way to demonstrate neutrality. In a globalized world of interconnectedness and interdependencies,Footnote 199 the principle of independence needs to be implemented in context. It is often about effectively managing the dependencies that humanitarians cannot avoid (access depends on parties to conflict, funding depends on donors, acceptance depends on populations, etc.) and striving to preserve a sufficient level of operational autonomy to act with impartiality and to be perceived as neutral.
Throughout history, humanitarian actors have always had to navigate turbulent political waters to maintain their neutrality and independence, with a constant need to circumvent the “if you are not with us, you are against us” framework of the times. This binary approach regularly flares up as non-international conflicts continue to internationalize, and as international conflicts between States re-emerge,Footnote 200 leaving humanitarian organizations stuck between a rock and a hard place, at the mercy of polarizing information ecosystems and disinformation campaigns against them on social media.Footnote 201
Humanitarians and the geopolitical digital chessboard
Digitalization is making humanitarian neutrality and independence more difficult. It is indeed often not enough for humanitarian actors to be neutral and independent – they must also be perceived as such. In the current competitive and polarized environment, the decision to use a particular technological tool is likely to be increasingly perceived as a political choice, and one associated with those behind the tool.Footnote 202 Choosing a US-based tech provider is not the same as choosing a Chinese or European one, because each provider abides by a different legal and political framework representing the preferences of its associated State. Indeed, in the increasingly competitive international environment described by Eric Schmidt at the beginning of this article, tech is about power, and therefore politics.Footnote 203 States and private actors are racing to control digital technologies and the supply chains behind them – from the rare-earth materials from which microchips are madeFootnote 204 to the skills and machines required to build those chips, and all the way to the infrastructures and data that are needed to make them functionFootnote 205 – and are building coalitions to increase their commercial and political influence at the global level.Footnote 206 While this competition is not new, the stakes are increasing, as is the impact on the humanitarian sector.Footnote 207
When donors, host States or partners are promoting or demanding the use of specific technologies or brands in the context of humanitarian action, this reduces the choice that humanitarian organizations can make to select the tools that best fit their needs and operational constraints, thus de facto challenging their operational autonomy and independence. When those technologies or brands also happen to be used for military or security purposes by parties to conflict or entities associated with them, there is a risk that such a choice will be perceived as a political one, thereby impacting the perception of humanitarian organizations’ independence from those parties and entities and jeopardizing the perception of their neutrality. As the current debates on sanctions against certain Chinese companies and technologies demonstrate,Footnote 208 States are using sanctions and other restrictive measures to influence who can use what technologies. Just as sanctions in other domains contain exceptions to exempt humanitarian actors from their scopeFootnote 209 so as to preserve the perception of their independence and neutrality from political decisions, it seems increasingly relevant to also protect such actors from undue political interference or cooptation through the choice of technology used for humanitarian purposes. Indeed, humanitarians’ growing reliance on digital technologies is an interesting vector for expanding the geo-strategic digital battle into new territories, and humanitarian organizations risk becoming de facto tactical pawns on the international chessboard for digital hegemony.Footnote 210 Understanding the digital dimensions of neutrality and independence can help them find a way out of a game they are not meant to play.
This starts by understanding the digital transformation's political economyFootnote 211 and discerning the roles that its key actors play in order to better delineate the parameters through which humanitarians should relate to those actors in their efforts to best protect the perception of their neutrality and independence.
First, the origin of most truly innovative digital technologies over recent decades can be traced back to the research and development (R&D) activities of the main actors of conflicts: States’ armed and security forces. The internetFootnote 212 was created in the laboratories of the US Department of Defense's Advanced Research Projects Agency (ARPA).Footnote 213 Drones were born on the battlefield to support air warfare capabilities.Footnote 214 The machine learning tools that power commercial AI applications are often inspired by innovations geared for military purposes, and the “Turing test” that defines their level of “intelligence” was invented in a military context.Footnote 215 Today, facial recognition and biometrics are mostly used for security purposes, from identifying potential “terrorists” to managing border security and migration flows.Footnote 216 In short, while many digital technologies are “dual-use” (i.e., both military and civilian) in nature, it is important to remember that they were often initially created and designed for military and security purposes, and that they are increasingly central to States’ agendas and strategies in these domains. When humanitarian actors choose to use the same technologies or tools as political and military actors, they should take into account that affected people and parties to conflict may perceive such a decision as an affiliation by association with those actors – even though in practice, they often have no choice considering the ubiquity of those technologies or brands and the absence of any truly adequate alternative choices.
Over the years – and despite significant increases in States’ defence budgets across the worldFootnote 217 – the R&D capacities of States have often been outpaced by those of tech companies, which have thus become critical strategic partners in the innovation race. These relationships have grown closer – and more blurred. In China, Russia, the United States, Europe and elsewhere, States have always been important investors for tech companies, but the digital transformation has brought their symbiotic relationship to a new level.Footnote 218 Tech companies’ investment and governance structures are evolving accordingly, causing some to describe the traditional distinction between the public and private sectors in the tech domain as a “myth”.Footnote 219 The “revolving door” between tech companies and governments, which are alternatively managed or advised by the same individuals, illustrates the growing difficulty – including for humanitarian actors in search of neutrality and independence – of distinguishing between political and commercial actors and delineating their respective agendas.Footnote 220
This blurring of the lines is also reflected in tech companies’ growing influence in conflict and humanitarian settings.Footnote 221 From supplying connectivity to the deployment of digital means of warfare,Footnote 222 to providing technical advice and support for cyber defence,Footnote 223 to complying with government requests for data or “takedowns” of information content on their platforms,Footnote 224 tech companies increasingly find themselves in the middle of the battlefield.Footnote 225 Sometimes this is largely involuntary and due to the dependencies attached to commercial partnerships with States, or to ownership of digital infrastructures.Footnote 226 In other situations this alignment is triggered by a strategic decision to support parties to a conflict,Footnote 227 in line with the tech company's interests or values.Footnote 228 In recent years, tech companies have proactively invested in their capacity to influence the global political agenda, including its security dimensions.Footnote 229 Tech companies’ founders and CEOs – and a growing number of lobbyists working on their behalf in places of powerFootnote 230 – are leveraging the space left by the erosion of traditional multilateralism in order to advance their own agendas,Footnote 231 and are becoming increasingly political actors from which humanitarian organizations should maintain as much independence as possible.
Tech companies’ objectives are multiple and diverse, sometimes overlapping with and sometimes diverging from States’ – and humanitarian – objectives. This complicates the traditional independence principle. In some cases, tech companies’ objectives (which have included moving people to MarsFootnote 232 or acquiring guns and a safe refuge in case AI drives humanity to its extinctionFootnote 233) can overlap, or create tensions, with humanitarian objectives. For instance, the “effective altruism”Footnote 234 or “long-termism”Footnote 235 allegedly driving the use of digital technologies for social goodFootnote 236 seem generally aligned with humanitarian ambitions, but in practice, the methodologies employed in such projects raise significant questions about their short-term impact and the negative consequences they are already having.Footnote 237 Some of these projects, involving the collection of iris scans from populations with limited income in exchange for digital identities and currencies,Footnote 238 have raised data protection alarmsFootnote 239 and concerns regarding data extractivism.Footnote 240 The humanitarian marketing veil used to promote these private sector initiatives feeds “hybridization” concernsFootnote 241 because such projects can create a dangerous confusion with purely humanitarian endeavours, in particular when they are carried out through partnerships with the tech companies behind the systems involved.Footnote 242
The evolution of tech companies’ political posture and ambitions changes their identity and perception, and this impacts their relationships with humanitarian organizations, with potentially negative consequences for the perception of the latter's neutrality and independence from associated political and strategic objectives. Tech companies’ dominance over the digital transformation amplifies the asymmetries of power that characterize their partnerships. It also increases the risks of “aidwashing”, through practices that “involve the use of corporate social responsibility initiatives and … partnership with aid actors to burnish surveillance firms’ reputations and distract the public from corporate misbehaviour, ethical misdeeds, and dubious data practices”.Footnote 243 The problem of “dual loyalty” that can deflect humanitarians’ “primary loyalty to … those affected by crises” towards the third-party digital service providers on which they rely has “real implications for the rights and needs of affected people”.Footnote 244 While the trade-offs attached to these partnerships can be positive in terms of efficiency and scale, they can also negatively impact the perception of humanitarians’ independence and the attendant trust of the populations they serve.Footnote 245
While humanitarian organizations battle to reconcile their history with renewed and legitimate questions about colonialism,Footnote 246 their contribution to the expansion of the digital transformation of fragile countries is fuelling concerns about their independence.Footnote 247 The growing debates around the role of humanitarians in “techno-colonialism”Footnote 248 and “digital extractivism”Footnote 249 highlight the concerns that the historical intersections between colonialism and humanitarianism are repeating along the same routes and power asymmetries.Footnote 250 Instead of natural resources and labour, “techno-colonialism” aims at extracting data from the “digital bodies”Footnote 251 of people and communities of the global South in order to fuel the “fiefdoms”Footnote 252 and data-hungry surveillance business models of multinational tech companies. While these concerns are debatable, they call for serious examination by humanitarian organizations that have placed “localization” high on their agendasFootnote 253 but are technologically moving away from it.Footnote 254
Anecdotal examples of how digitalization is impacting the perception of humanitarian actors’ neutrality and independence are multiplying. The example of the UN World Food Programme (WFP) partnership with the data analytics firm PalantirFootnote 255 – a key partner of many security agencies across the worldFootnote 256 – illustrates the growing concerns around “surveillance humanitarianism”.Footnote 257 There is indeed a “significant but little understood” risk that these partnerships are used by the military and surveillance tech industry to gain access to strategic information, new markets and data streams,Footnote 258 notably through national security-based legislation enabling possible “backdoors” or data access requests.Footnote 259 In conflict settings, that risk has already been identified by authorities who allegedly stopped WFP's assistance programmes due to concerns about the further use of their biometric registration data.Footnote 260
Humanitarians are familiar with accusations of spying and partiality, which are common in polarized conflict settings. Regrettably, such accusations can become more difficult to reject when humanitarian actors rely on the same tech suppliers as conflict parties for data storage or connectivity.Footnote 261 This suspicion grows when they partner with tech companies engaged in military activities.Footnote 262 When tech partners proactively become involved in conflict-related issues, humanitarian organizations become de facto hostage to those relationships, with consequences for both their independence and the perception of it.Footnote 263
Digital dependencies and the “splinternet”
The growing dependenciesFootnote 264 that come with the digitalization of humanitarian action also trigger certain operational challenges. First, dependency on proprietary hardware and software to run operations creates a “vendor lock-in effect” that decreases the ability to choose alternative tools or providers – for instance due to perception concerns – without substantial switching costs.Footnote 265 Digitalization comes with a “ratchet effect”Footnote 266 that makes it difficult to stop using digital tools once they are integrated into operational structures; for example, cloud data storage is now often required to support digital services platforms at scale. This is particularly true for digital solutions that have been put in place for exceptional circumstances (such as temporary lack of access due to insecurity or pandemics): they tend to remain once those circumstances have passed, often because the investments behind them have significant amortization costs. In practice, such solutions have also eroded the operational resilience of humanitarian organizations, which have often disinvested in analogue alternatives – everyone uses a smartphone to coordinate field activities, but many do not know how to use a VHF radio, which may be life-saving when connectivity is down. In a world where internet shutdowns and connectivity denials are on the rise,Footnote 267 humanitarians should not over-invest in digital technologies at the cost of their ability to operate in low- or no-connectivity settings.Footnote 268
Another threat to operational independence is the fragmentation affecting the backbone of the digital transformation.Footnote 269 Once celebrated as a globalized level playing field, the internet has become a divided and contested space where States battle to assert their “digital sovereignty”.Footnote 270 While States struggle to align at the international level, they advance their respective strategies through massive investment campaigns (such as the Chinese Digital Silk Road initiative),Footnote 271 protectionist measures (such as US import/export controls and sanctions)Footnote 272 or regulatory action (such as the extraterritorial “Brussels effect” of the European Union GDPR or Digital Services Act).Footnote 273
The internet is turning into what some refer to as a “splinternet”Footnote 274 – i.e., a fragmenting digital and cyber space. This is making the internet increasingly difficult to navigate for humanitarians. One can imagine the practical challenges of running digital solutions in Africa, where digital infrastructure is increasingly likely to be Chinese-built, technological tools are often provided by US tech companies,Footnote 275 and data protection or sanctions requirements come from European donors. The absence of harmonized regulatory standards and interoperable technologies could become a real obstacle to the continuity and effectiveness of digital solutions.Footnote 276 To preserve the perception of their independence, it has become essential that humanitarian organizations anticipate these developments and prepare accordingly. An increasingly digital future requires them to upscale their ability to do so.
Charting the way forward: Towards principled digital humanitarianism
It is critical that humanitarians avoid the trap of tech dystopia and the fearmongering around it. The potential of new technologies to increase the effectiveness of humanitarian action is too significant to be missed. Excessive caution should not become an excuse to prevent the development of innovative humanitarian solutions that can make a positive difference in the lives of people affected by conflict, violence and disasters.Footnote 277
It is however also fundamental that humanitarians resist the temptation to jump onto the techno-solutionist innovation hype train without fully understanding where it is going. The current naiveFootnote 278 competition-driven approach to digital innovation is not adequate to manage the dilemmas that the digital transformation creates for principled humanitarian action.Footnote 279 The absence of a comprehensive conversation about the digital transformation of humanitarianismFootnote 280 – and the expanded responsibilities that should come with itFootnote 281 – has become a serious liability, with negative consequences for affected populations and principled humanitarian action.
Humanitarians must explicitly acknowledge the political dimension of the digital transformation and the risks of overly utilitarian approaches to it. They must better manage the fascination and confirmation biases that often characterize their relationship with digital technologies, and the partnerships that come with them. The humanitarian principles are a useful compass to guide a responsible approach centred on protecting affected people's rights and dignity, and on preserving the core elements of the humanitarian mantra that are so essential to the humanitarian mission. The principles are not justifications to shy away from digital risks, but a useful tool to mitigate them. They can become the common grammar that is missing to build constructive conversations between humanitarians and tech companies.Footnote 282 It has become urgent to better leverage the principles’ potential and effectively integrate them into the decision-making processes that define digital humanitarian strategies and operations, from innovation and procurement to partnerships and programmatic responses.
More specifically, the principle of humanity can help maintain the human-centred, rights- and needs-based approach that defines the humanitarian methodology. A precautionary attitude to the digital transformation would help better synchronize humanitarian innovation with needs and crisis settings, and with the ethical tempo that such settings require. Understanding the digital dimensions of “doing no harm” and the significant role of data protection and cyber security for human securityFootnote 283 implies serious investment, including gradually turning digital literacy into a professional requirement for all humanitarians.Footnote 284
Articulating the risks that data and AI-based solutions bring for impartiality is essential to ensure that human biases are not replaced by unmanageable algorithmic ones. Investing in human interactions and proximity, leveraging social sciences such as anthropology and sociology, can help preserve the essential ingredients of humanitarian action. Human intelligence, despite its flaws, remains the most effective safeguard against the risks that AI creates. Efforts to minimize data collection, operationalize data protection and security, and establish effective mechanisms to engage affected people on the relevance, use and risks of digital solutions will be required to overcome the current disconnect – and hypocrisy – around accountability towards affected people.
Maintaining neutrality and independence in an increasingly polarized and fragmented digital world demands a critical review of the humanitarian sector's current approach to digitalization. Ensuring smart, sustainable and impact-driven digital investments can help enhance protective outcomes, operational autonomy and resilience. Making the right technological choicesFootnote 285 (including not using innovative technologies, when appropriate) – and being transparent about them – can help improve the perception of and trust in humanitarian neutrality and independence. This implies favouring relationships with non-profit academic and public actors driven by shared objectives, and exploring the relevance, potential and safety of free and open-source solutions.Footnote 286 It also implies exploring the possibility of developing autonomous R&DFootnote 287 for humanitarian technologies that better align, by design, with humanitarian objectives and requirements.
Humanitarians must explicitly acknowledge tech companies’ growing role in politics and conflict. This implies going beyond supply and partnership relationships, and reconfiguring them to make possible a dialogue with these companies that addresses their impact on conflict dynamics, the protection of people affected by conflict and violence, and principled humanitarian action. Developing humanitarian “tech-plomacy”Footnote 288 capabilities can help anticipate geopolitical transformations and create a space for diplomatic conversations that better integrate the need to define what humanitarian neutrality and independence should look like in the digital sphere.
Humanitarians can and should do more to address these issues. States, donors and tech companies must support their efforts and respect and protect their commitment to humanity, impartiality, neutrality and independence. Now is the time to act to ensure that the promises of the digital transformation deliver positive outcomes for populations affected by conflict and disasters.