
How harmful information on social media impacts people affected by armed conflict: A typology of harms

Published online by Cambridge University Press:  18 December 2024

Bailey Ulbricht
Affiliation:
Executive Director, Stanford Humanitarian Program, Stanford Law School, Stanford, CA, United States
Joelle Rizk*
Affiliation:
Digital Threats Adviser, Protection Department, International Committee of the Red Cross, Geneva, Switzerland
*Corresponding author email: jrizk@icrc.org

Abstract

Armed conflict presents a multitude of risks to civilians, prisoners of war and others caught in the middle of hostilities. Harmful information spreading on social media compounds such risks in a variety of tangible ways, from potentially influencing acts that cause physical harm to undermining a person's financial stability, contributing to psychological distress, spurring social ostracization and eroding societal trust in evidentiary standards, among many others. Despite this span of risks, no typology exists that maps the full range of such harms. This article attempts to fill this gap, proposing a typology of harms related to the spread of harmful information on social media platforms experienced by persons affected by armed conflict. Developed using real-world examples, it divides potential harm into five categories: harms to life and physical well-being, harms to economic or financial well-being, harms to psychological well-being, harms to social inclusion or cultural well-being, and society-wide harms. After detailing each component of the typology, the article concludes by laying out several implications, including the need to view harmful information as a protection risk, the importance of a conflict-specific approach to harmful information, the relevance of several provisions under international law, and the possible long-term consequences for societies from harmful information.

The information used for this typology is based entirely on open-source reporting covering acts that occurred during armed conflict and that were seemingly related to identified harmful information on social media platforms or messaging applications. The authors did not verify any reported incidents or information beyond what was included in cited sources. Throughout the article, sources have been redacted from citations where there is a risk of reprinting harmful information or further propagating it, and where redaction was necessary to avoid the impression that the authors were attributing acts to particular groups or actors.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press on behalf of International Committee of the Red Cross

In wartime, people might not only be caught in the middle of hostilities. While trying to avoid fire, navigate fear and seek safety, people may also be surrounded by rampant false or hateful information spreading rapidly across social media platforms and messaging applications (hereinafter referred to simply as “social media”).Footnote 1 Far from being intangible, such information can harm persons affected by armed conflict – who include but are not limited to civilians – in a variety of very real ways, some of which may violate international humanitarian law (IHL) or international human rights law (IHRL).Footnote 2 Such information, for example, may spur individualized violent action against persons protected by the 1949 Geneva Conventions, such as civilians or prisoners of war,Footnote 3 on top of the ongoing hostilities that are inherent to armed conflict.Footnote 4 It may foster hatred or discrimination against certain communities, leading to their members being cut off from essential services,Footnote 5 or it may compromise a person's situational awareness, causing that person to make harmful decisions such as avoiding necessary aid or evacuating along unsafe routes.Footnote 6 Such information may also cause harm in and of itself: hateful narratives, for example, may induce significant anxiety, fear or depression among individuals who identify with the targeted group, and narratives spreading false or misleading information about particular individuals may injure such persons’ reputations, even if no further action is taken against them.Footnote 7 In short, the spread of such information on social media can lead to a multitude of harms that affect people caught in the middle of conflict in discernible, visible ways.

Currently, no typology exists that maps out the full range of harms influenced by information circulating on social media – be it individual pieces of content or narratives that develop from consistent threads of information – in wartime. Far from being a merely academic exercise, mapping out a more complete picture of such harms can help inform stronger, better tailored and more comprehensive policy and legal responses to harmful information spreading on social media in armed conflict settings.Footnote 8 Indeed, the International Committee of the Red Cross (ICRC) considers understanding harms as integral to developing conflict-specific responses that reflect “the complex realities and vulnerability of people affected by war and violence”.Footnote 9

This article proposes a typology of harms related to the spread of harmful information on social media experienced by persons affected by armed conflict. The typology divides the harms into five categories: (1) harms to life and physical well-being, (2) harms to economic or financial well-being, (3) harms to psychological well-being, (4) harms to social inclusion or cultural well-being, and (5) society-wide harms. Each category of harm is presented in a table composed of three columns: examples of information that we classify as harmful, a subsequent offline harmful act, and the resulting harms.Footnote 10 The structure of the typology loosely follows existing typologies in other contexts that we have used for reference.Footnote 11

The typology was developed and populated using reported examples of harmful content or narratives on social media, along with documented subsequent harmful acts from current and past armed conflicts that occurred around the same time. We inferred connections between the content and the acts based on timing and content relevance; in some instances, there is also additional supporting evidence suggesting a link between the two. The examples are intended to provide concrete illustrations, but they are not meant to imply evidence of causality between the information and the harm. Instead, they are meant to demonstrate the range of possible harmful acts and/or subsequent harms that can be influenced by the spread of harmful information. In addition, the typology focuses on harms against people affected by conflict, and predominantly – though not exclusively – on harms affecting civilians. Given this focus, it is beyond the scope of this article to assess contexts other than armed conflict, such as disaster response or social unrest.

Information used for this typology is based entirely on open-source reporting; no primary sources were used. The authors did not verify any reported incidents or narratives beyond what was included in cited sources. Throughout the article, sources have been redacted from citations where there is a risk of reprinting harmful information or further propagating it, and where redaction was necessary to avoid the impression that the authors were attributing acts to particular groups or actors.

Defining harmful information

The term “harmful information” as used in this article describes particular posts or pieces of content, or narratives (collections of posts or pieces of content espousing a consistent argument or idea), that (1) are false, misleading or hateful, or consist of or encourage violations of international law, in particular IHL, and (2) have the potential to influence harmful acts or otherwise contribute to harm against protected persons in conflict settings.Footnote 12 This definition, which relies on the ICRC's working description of information that is considered potentially harmful to persons affected by armed conflict, includes misinformation,Footnote 13 disinformation,Footnote 14 malinformationFootnote 15 and hate speechFootnote 16 as well as other forms of information that consist of or encourage violations of international law.Footnote 17 The definition intentionally focuses on the harmful effects of information in conflict.Footnote 18 For example, harmful information may not encompass all forms of wartime propaganda, even though such propaganda may involve deception;Footnote 19 it may, however, include wartime propaganda when such propaganda has the potential to influence harmful acts or otherwise contribute to harm against people affected by armed conflict.

Theories underpinning the typology

Harmful information – whether particular posts or pieces of content, or narratives that emerge from consistent threads – can influence the occurrence of harm in two ways: via subsequent acts committed by an information consumer, or by leading to harm by itself with no intervening act. The theoretical assumption underpinning the typology is accordingly that harmful information may be an influential contributor either to intervening acts (referred to as “harmful acts” in this typology) or to harm directly, where no intervening act occurs.

Though it is difficult to prove linkages between information and harmful acts, this typology infers connections between the two by looking at timing and content relevance: where a harmful narrative or piece of content was circulating around the time certain harmful acts occurred, and the information appears on its face to be relevant to those acts, we treat the harmful information as a potential influential contributor to the acts' occurrence. For example, in an armed conflict, a belligerent may regularly and intentionally target civilians from a particular ethnic group at the same time that a harmful narrative targeting this same group is circulating on social media. In this situation, the typology would treat the harmful narrative as influentially contributing to the belligerent's behaviour, even where no further evidence of such a link exists. Where no act occurs, we similarly look to the timing and relevance between the information and the resulting harm.

In addition, it is inapt to say that harmful information on its own directly causes harmful acts (see Figure 1), because that assertion ignores additional contextual factors that push people to interpret content in a certain light and act on it. Take this example: a piece of content about a man who is a member of an ethnic minority circulates on social media, falsely accusing him of committing an act of sexual violence against a woman who is a member of the ethnic majority group.Footnote 20 The content stokes so much furore that groups of people who read – and believe – the narrative are moved to subsequently attack and injure the falsely accused man and other members of his ethnic group.Footnote 21

Figure 1. Misconstrued relationship between information on social media and harmful acts in conflict settings.

In this example, it may seem like the content that was aimed at the man on social media directly caused the harm. However, that interpretation is misleading because it ignores the broader contextual factors that shape the behaviour of those consuming the narrative. In this case, there was a long-standing history of discrimination, stigmatization of the minority group was widespread, and the online information environment was made up of tens of thousands of hateful narratives directed at the minority ethnic group with which the man identified.Footnote 22 Given these various contextual factors, it is misleading to say that the content on its own caused people to take harmful action, although it likely played an important role. Instead, as noted earlier, we consider harmful information spreading on social media to be an influential contributor to harmful acts. Harmful information is part of a broader information environment at or around the time of the event, and this information environment is itself situated within a particular socio-political environment. In situations of armed conflict, the socio-political environment may be particularly fraught, made up of underlying causes of conflict like historical grievances, systemic inequalities, discrimination, intercommunal or ethnic rivalry, or poor governance. These contextual elements – both the information environment and the socio-political environment – form the context within which readers consume information, priming them to interpret content with particular biases and potentially even encouraging them to take action (see Figure 2).Footnote 23

Figure 2. Information properly situated within broader contexts.

For example, in a long-standing conflict already rife with distrust between two adversaries caused by a history of political violence, a singular hateful post on social media – such as calling for a village inhabited by one side to be “erased” – can be a trigger for violent riots that cause death, injury and property destruction.Footnote 24

Additionally, harmful information spreading on social media platforms can sometimes influentially contribute to harm without any harmful act at all (see Figure 3). For example, malinformation or disinformation about an individual accused of treason can lead to reputational damage and ostracization for that person, as well as trigger psychological distress. This type of harm exists regardless of whether certain acts are committed after such information spreads. Moreover, just as in cases where there are harmful acts, we understand harmful information to be situated within a broader information environment and socio-political context, which may cause readers to experience resulting harms in varying ways. For example, referring to a prominent individual as a “traitor” may in some information environments and socio-political contexts result in that person being socially ostracized.

Figure 3. Information influencing harm without a harmful act.

In sum, harmful information may influentially contribute to harm, often (though not exclusively) through subsequent acts. However, harmful information, whether in the form of individual pieces of content or threads that form narratives, does not on its own cause harmful acts or resulting harm. Instead, information is consumed by users within a broader information environment and socio-political context that encourage users to interpret information in particular ways.

The typology of harms influenced by harmful information on social media in conflict settings

The typology presented in this article includes five categories of harm that may result from harmful information spreading on social media: physical, psychological, economic/financial, social/cultural and society-wide. These categories are presented in Table 1, together with the potential harmful acts covered by each category. In Tables 2–6, each category is arranged according to the subsequent harmful act that was or may have been influenced by the spread of harmful information on social media. This structure allows the reader to see how the identified harmful information might influentially contribute to particular harmful acts. Where there is no intervening harmful act, the harmful act column is marked “N/A”. Possible resulting harms are detailed in the rightmost column. Each entry in every row is derived from an example from current or past armed conflicts, based on open-source information researched and decontextualized for the purpose of this typology.

Table 1. Harmful acts in each category potentially influenced by harmful information on social media

Importantly, in reality, harmful acts often result in a multitude of harms that cross various categories. For example, torture may lead not just to physical harm but also to psychological, financial, social and even society-wide harms. Similarly, harmful information spreading on social media may lead to a multitude of harmful acts, and not just one type of act. In separating these harmful acts and resulting harms from each other, our intention is to clarify how each may be influenced by information spreading on social media platforms, not to imply that certain information may have influentially contributed only to particular acts, or that an act causes only a limited range of harms.

Harms to the life and physical well-being of people affected by armed conflict

Harms to the life and physical well-being of people affected by conflict (who are often but not exclusively civilians) are typically readily identifiable, involving acts that result in a person's death, illness or physical injury (see Table 2).Footnote 25 The typology presented in this article identifies the following subsequent harmful acts in armed conflict settings that may result in physical harm and that may potentially be influenced by harmful information on social media: (i) small arms attacks or extrajudicial killings by individuals, communities, police, militia, armed groups and/or parties to an armed conflict; (ii) attacks amounting to genocide; (iii) attacks amounting to crimes against humanity; (iv) torture and other forms of ill-treatment; (v) sexual or gender-based violence; (vi) abductions, including but not limited to forced disappearances or other arbitrary deprivations of liberty; (vii) destruction of goods essential to the survival of the civilian population, such as food or water supplies; (viii) blocking or limiting of access to asylum procedures; (ix) denial of life-saving humanitarian assistance; and (x) taking incorrect and/or unsafe routes for evacuation.Footnote 26 Item (ix) refers to situations where life-saving humanitarian and/or medical assistance is blocked or otherwise rendered inaccessible. This could happen through, for example, a physical blockade, the revocation of approval for certain organizations’ operations, or the targeting of humanitarian workers. It could also happen by skewing the decision-making of civilians who believe misinformation or disinformation about certain organizations and choose to avoid their services as a result.

Table 2. Physical harms potentially influenced by harmful information on social media

Harms to the economic or financial well-being of people affected by armed conflict

Economic or financial harms to people affected by armed conflict involve monetary losses incurred by individuals as well as broader economic harms to communities.Footnote 80 Such harms could include loss of financial resources, loss of physical property, impoverishment, lack of livelihood, lack of access to economic opportunities or lack of access to necessary services like housing, health care or childcare (see Table 3). Acts that could result in economic or financial harm in armed conflict settings and that may potentially be influenced by harmful information on social media include (i) theft, (ii) property destruction, (iii) blocking or limiting access to employment, (iv) forced evictions (which are cross-listed as social and cultural harms), (v) State-imposed restrictions on social media sites, broadband, telecommunications and/or the internet, and (vi) limiting access to essential services.

Table 3. Economic or financial harms potentially influenced by harmful information on social media

Harms to the psychological well-being of people affected by armed conflict

Psychological harms refer to a range of negative responses such as anxiety, fear of bodily harm or injury, fear of retaliation, powerlessness, depression, sleeplessness or an inability to make day-to-day decisions (see Table 4).Footnote 98 It is not uncommon for harms to physical well-being or economic harms of the kind described above to be accompanied by psychological harms.Footnote 99 Moreover, being subjected to a narrative or specific piece of content may cause psychological harm in and of itself, without an intervening harmful act. Other harmful acts that may cause psychological harm in armed conflict settings and that may potentially be influenced by harmful information on social media include (i) security threats such as death threats or bomb threats, (ii) the denial of the occurrence of harmful events, and (iii) corpse desecration.

Table 4. Psychological harms potentially influenced by harmful information on social media

Harms to the social or cultural well-being of people affected by armed conflict

This category merges two closely related categories of harms: social harms and cultural harms. Harms to the social well-being of people affected by conflict include those that involve their relationships, life activities and functioning in society.Footnote 110 They typically involve marking an individual or group as “blemished” so that they are viewed as “other” and thereby distinct from mainstream society.Footnote 111 Social harms in this context include reputational harm,Footnote 112 social ostracization, stigmatization and discrimination, while cultural harms include degrading or destroying peoples’ cultural identity, cultural practices or cultural expressions (see Table 5).Footnote 113 Acts in armed conflict settings that may be influenced by harmful information on social media and that may result in social or cultural harms include (i) acts of discrimination; (ii) deportation, displacement, forced evictions or refoulement; and (iii) desecration of cultural property. Note that in certain cases listed in Table 5, the harms resulted from the harmful information itself, without an intervening harmful act.

Table 5. Social or cultural harms potentially influenced by harmful information on social media

Society-wide harms

Finally, society-wide harms are those that cause societal unrest and destabilization at scale. They include epistemic insecurity, or the erosion of trust in truth, evidence and evaluative standards;Footnote 124 the spreading of societal fear;Footnote 125 “chilling effects” that limit the exercise of certain civil liberties, such as free speech, political participation, religious activity, free association, freedom of belief and freedom to explore ideas;Footnote 126 lack of access to the internet; the perpetuation of instability or conflict; and the entrenchment in society of disadvantages to, and stigmatization of, certain groups (see Table 6). The acts examined in this section, which were potentially influenced by harmful information on social media and which can result in such society-wide harms, include (i) restriction of access to the internet, (ii) patterns of discriminatory acts, and (iii) rejection of peace deals. Again, harms in certain cases resulted from the harmful information itself, without an intervening act.

Table 6. Society-wide harms potentially influenced by harmful information on social media

Implications of the typology of harms

This typology is not intended to prove correlation or causation between the spread of a specific narrative or piece of content and a particular harm or harmful act. Instead, the typology illustrates how, in the wake of such harmful information spreading on social media, people experienced harm, often through subsequent acts that appear to have been influenced by the harmful information. In doing so, the typology reinforces the notion that harmful information spreading in times of conflict must be treated as a humanitarian concern, because it may influentially contribute to the unnecessary suffering of people affected by armed conflict, undermining their safety and well-being in the process.

While this typology does not attempt to outline possible interventions in response to harmful information spreading during situations of armed conflict, there are several important implications from the typology worth highlighting.

Harmful information is a protection risk

The spread of harmful information presents serious protection risks for people affected by armed conflict. As shown throughout this typology, harmful information can influentially contribute to violent behaviour, aggravate discrimination or persecution of minority communities, intensify psychological distress, and undermine people's ability to make informed decisions about their safety and well-being, to name only a few such risks.

These risks require responses that focus not only on the information itself but also on addressing or preventing potential harmful acts, including risk mitigation strategies that prioritize affected persons’ safety, well-being and resilience.Footnote 140 Strategies, in other words, ought to go beyond curbing the information itself to encompass protection approaches that address the behaviour of stakeholders in armed conflict and the factors that exacerbate the vulnerabilities of affected people. As a first step, protection-focused organizations working in such contexts should incorporate the spread of harmful information into their risk assessments and protection operations. For other humanitarian organizations operating in such contexts, efforts to detect and/or understand the spread of harmful information will be critical to ensuring that their mandates are carried out. More broadly, protection work should encompass harms related to access to information or exposure to harmful information, and should ensure that the full range of potential harms from such information is addressed.

Addressing harmful information requires a conflict-specific approachFootnote 141

Armed conflict settings have fewer of the protective guardrails that, in times of peace, might otherwise prevent harmful information from spilling over into offline violence or other harmful acts. Responses to harmful information must therefore account for the unique risks and dynamics present during armed conflict by embracing a conflict-specific approach.Footnote 142

Such an approach may require, for example, focusing on information that may not be false, misleading or hateful but still poses serious protection risks.Footnote 143 For instance, political and war-related narratives that escalate existing tensions between two groups may not be false, misleading or hateful, but in armed conflict settings can still trigger acts of violence, and thus would likely fall within the category of harmful information. Another example is a narrative that discourages humanitarians from providing care to people who live in an area controlled by an opposition group. Such information is not necessarily false, misleading or hateful, but it may actively impact certain communities’ access to care, and thus could have deleterious effects. Information should be approached with these and other risks that are unique to armed conflict settings in mind.

A conflict-specific approach should also respect different identities and grievances, and should, in doing so, be sensitive to certain counter-speech or fact-checking interventions that could further fuel tensions. For example, recent research has found that fact-checks may further polarize attitudes if they include vitriolic or contentious language.Footnote 144 In an armed conflict, what language is considered vitriolic or contentious may vary depending on a person's identity or grievances, and sensitivity around such issues must be incorporated into any response strategy. One generic approach that may avoid such risks might simply be to prioritize access to reliable information and connectivity, taking active measures to address information vacuums that may otherwise give space for the spread of harmful information.Footnote 145 In addition, intervenors may seek to empower users directly, giving them tools that help them identify false or provocative narratives, raise their awareness about the risks associated with spreading harmful content, and assist with protecting their mental and psychological well-being.Footnote 146

Relatedly, a conflict-specific approach will require a deep understanding of the local socio-political context. This is important because, as discussed earlier in this article, information is not simply shared and interpreted in a vacuum. Instead, users share and interpret information within a particular socio-political context, and in an armed conflict setting, this context often includes underlying tensions and grievances. Understanding this context will help stakeholders, including content moderators, civil society actors, protection officials and humanitarians, determine which information is more likely to influentially contribute to harmful acts and thus react more effectively.

Finally, though social media platforms are not content generators themselves, their models and policies directly influence the reach, promotion or demotion of certain information.Footnote 147 Social media companies should adapt their policies to armed conflict situations to address the spread of information that is potentially harmful to people's physical safety and psychological integrity, as others have argued.Footnote 148 This may require, for example, serious investment in adequate content moderation capacities for particular languages, context-specific implementation of platform policies, and crisis response teams focused on particular armed conflict settings, equipped with a deep local understanding of that context and able to respond swiftly to identified harmful information that risks influencing harmful acts.Footnote 149

Harmful information could trigger certain provisions under international law

As detailed in the above typology, harmful information may influentially contribute to harm experienced by people affected by armed conflict, including but not limited to civilians. Some of these contributions could conceivably trigger certain provisions under international law, including under IHL, IHRL and international criminal law.Footnote 150 Such provisions include the IHL prohibition on threats of violence whose primary purpose is to spread terror among civilians;Footnote 151 the IHRL prohibitions on speech that incites violence or discrimination and on propaganda for war;Footnote 152 and the war crime, under international criminal law applicable to international armed conflicts, of “willfully causing great suffering, or serious injury to body or health”.Footnote 153 Although spelling out all applicable international laws and their varying jurisdictional reaches is beyond the scope of this typology, responses to harmful information in times of armed conflict should connect the various protective legal frameworks where relevant.

In addition, freedom of expression rights under IHRL should be incorporated into any response strategy. Admittedly, however, the proper balance between protecting freedom of expression and protecting people from other harms prohibited by IHRL that may be influenced by harmful information, such as torture or extrajudicial killings, is not altogether clear.Footnote 154 Indeed, each aim suggests a seemingly opposite approach to handling protection risks that arise from harmful information, and while IHRL acknowledges the potential need to curtail freedom of expression where the rights of others are implicated, it does not clarify the extent to which freedom of expression can or should be curtailed.Footnote 155 What is clear is that blunt information restrictions based on vague information categories would likely exacerbate the harm experienced by people affected by armed conflict, for example by restricting their access to reliable information.Footnote 156 Instead, strategies with clearly defined, limited restrictions curbing the spread of harmful information could minimize the impact on freedom of expression rights.Footnote 157

Finally, although many relevant international law provisions may not implicate social media companies directly,Footnote 158 such companies should further their own commitments under IHRL by conducting heightened due diligence, as recommended in the United Nations (UN) Guiding Principles on Business and Human Rights, and evaluating the human rights and humanitarian impacts of their operations in armed conflict contexts.Footnote 159

Harmful information may have long-term consequences for societies

Finally, the spread of harmful information in conflict settings can have consequences beyond an individual event or incident. As described in the typology, such long-term society-wide consequences include, but are not limited to, the erosion of trust in truth,Footnote 160 long-term societal spreading of fear,Footnote 161 and “chilling effects” that effectively silence people's speech and expression.Footnote 162 Of equal importance in the context of armed conflict, the spread of harmful information may also reinforce group stigmatization or promote patterns of discrimination and polarization between opposing sides.Footnote 163 These and other societal consequences may be unintended, delayed, or difficult to recognize, but they are important to highlight because they may have serious secondary repercussions in situations of armed conflict, like lowering prospects for conflict resolution or decreasing respect for IHL.Footnote 164

To safeguard societal resilience and protect people from repeated manipulation, investments should be made in programmes that reinforce critical thinking and equip people with the awareness, literacy and skills to address information-related risks themselves.Footnote 165 Information alone is not harmful; it becomes so when people's grievances are exploited, their beliefs polarized, or their trust manipulated or otherwise diminished. Reinforcing people's ability to critically assess the information they consume should be a response priority, particularly during situations of armed conflict, when people and societies are vulnerable to information manipulation and exploitation. Doing so not only minimizes the harmful effects of information now but also promotes societal resilience for the future.

Conclusion

Harm to people affected by conflict – including but not exclusively civilians – is often caused by the means and methods of warfare, along with supplementary violent behaviour by belligerents. Yet in the digital age, information-related threats pose additional protection risks to people affected by conflict. Harmful information, as mapped in the typology presented in this article, can contribute to a multitude of harms to individuals, communities and even societies at large during armed conflict. Harms may be physical, but they may also be economic or financial, psychological, cultural or social, or even society-wide. Information may influentially contribute to harmful acts, but it may also influentially contribute to harm in and of itself, with no intervening act at all. These information-related harms occur on top of harms already experienced from the inherently violent and destabilizing nature of conflict, increasing the suffering experienced by people caught in the middle of conflict. By mapping such information-related harms, this typology frames harmful information as a protection risk and not just a communications concern.

Though a full discussion of potential responses to harmful information that spreads during armed conflict is beyond the scope of this article, at a minimum, such responses must be conflict-specific. As detailed above, this means that curbing the spread of harmful information should be complemented by harm mitigation measures and efforts to address the behaviour of State and non-State actors who may act on harmful information. Efforts to improve access to connectivity and to trusted information in times of conflict should be prioritized. Finally, activities that foster media literacy and assist people in discerning false, hateful or misleading content may help to mitigate some of the harmful effects of harmful information, and ought to be pursued even before an armed conflict arises. More research should be pursued that measures the efficacy of certain interventions, but one thing is certain: harmful information produces a wide range of harms, and it should be taken seriously as a protection risk for civilians, prisoners of war and others caught in the middle of armed conflict.

Footnotes

The advice, opinions and statements contained in this article are those of the author/s and do not necessarily reflect the views of the ICRC. The ICRC does not necessarily represent or endorse the accuracy or reliability of any advice, opinion, statement or other information provided in this article.

References

1 See Dilshad Jaff, “Comment: The Surge of Spreading Harmful Information through Digital Technologies: A Distressing Reality in Complex Humanitarian Emergencies”, The Lancet Global Health, Vol. 11, No. 6, 2023, available at: https://www.thelancet.com/journals/langlo/article/PIIS2214-109X(23)00207-3/fulltext (discussing how digital technologies are increasing the quantity of harmful information spreading during complex humanitarian emergencies like armed conflict) (all internet references were accessed in November 2024). By “social media platforms”, we mean websites that allow a person to become a registered user or otherwise create a personalized profile to create, share and view user-generated content. See 42 USC § 1862w(a)(2); see also Jonathan Obar and Steven Wildman, “Social Media Definition and the Governance Challenge: An Introduction to the Special Issue”, Telecommunications Policy, Vol. 39, No. 9, 2015, available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2647377. By “messaging applications”, we mean applications which enable individual messages, including text, audio or video, to be sent and received by individual users. See “Encyclopedia: Messaging App”, PC Mag, available at: www.pcmag.com/encyclopedia/term/messaging-app.

2 See, generally, Joelle Rizk, “Why is the ICRC Concerned by ‘Harmful Information’ in War?”, Humanitarian Law and Policy Blog, 10 September 2024, available at: https://blogs.icrc.org/law-and-policy/2024/09/10/why-is-the-icrc-concerned-by-harmful-information-in-war/ (describing how harmful information contributes to humanitarian consequences in war and may trigger acts that violate international humanitarian or human rights law); Chris Brew and Lauren Spink, “Disinformation Harms Civilians in Conflict in More Ways than You Thought”, CIVIC Blog, 20 August 2022, available at: https://civiliansinconflict.org/blog/disinformation-harms-civilians-in-conflict-in-more-ways-than-you-thought/ (discussing a range of potential civilian harms from disinformation in conflict settings, including disruption of humanitarian operations, manipulation of information about life-saving goods or services, and harms to civilians’ mental health). By “armed conflict”, we mean a situation of hostilities between States or, where a non-State actor is involved, a situation in which violence reaches a sufficient level of intensity. See International Committee of the Red Cross (ICRC), How is the Term “Armed Conflict” Defined in International Humanitarian Law?, ICRC Opinion Paper, 16 April 2024, available at: www.icrc.org/en/document/icrc-opinion-paper-how-term-armed-conflict-defined-international-humanitarian-law.

3 See Tilman Rodenhäuser and Samit D'Cunha, “IHL and Information Operations during Armed Conflict”, Articles of War, 18 October 2023, available at: https://lieber.westpoint.edu/ihl-and-information-operations-during-armed-conflict/ (discussing how IHL protects civilians and prisoners of war during information operations in armed conflict).

4 See, generally, Adrienne Brooks, “Scroll and Share: The Role of Social Media on Conflicts”, Mercy Corps Blog, 9 March 2022, available at: www.mercycorps.org/blog/role-social-media-conflicts (detailing how social media platforms spread information that exacerbates tensions, inflames conflict, and may spill over into offline violence during conflict settings, particularly during key events); Jonathan Stray, Ravi Iyer and Helena Puig Larrauri, “The Algorithmic Management of Polarization and Violence on Social Media”, Knight First Amendment Institute at Columbia University, 22 August 2023, available at: https://knightcolumbia.org/content/the-algorithmic-management-of-polarization-and-violence-on-social-media (finding, based on experimental research studies in psychology, that social media content can enhance divisiveness and escalate rather than de-escalate conflict, leading to, among other things, violent acts offline).

5 See, generally, J. Stray, R. Iyer and H. P. Larrauri, above note 4 (discussing how social media content may influence discriminatory views, which may further influence decision-making and actions offline).

6 See Sandrine Tiller, Pierrick Devidal and Delphine van Solinge, “The ‘Fog of War’ … and Information”, Humanitarian Law and Policy Blog, 30 March 2021, available at: https://blogs.icrc.org/law-and-policy/2021/03/30/fog-of-war-and-information/ (discussing how information – either a deluge of it or a lack of it – on social media may warp decision-making in conflict settings, causing people to make decisions that are harmful to their health or well-being).

7 See, generally, Arne Dreißigacker, Philipp Müller, Anna Isenhardt and Jonas Schemmel, “Online Hate Speech Victimization: Consequences for Victims’ Feelings of Insecurity”, Crime Science, Vol. 13, No. 4, 2024, available at: https://crimesciencejournal.biomedcentral.com/articles/10.1186/s40163-024-00204-y (finding empirical evidence that online hate speech causes fear and feelings of insecurity for targets of such speech); see also C. Brew and L. Spink, above note 2 (discussing negative mental health consequences for civilians in conflict settings who are exposed to disinformation, including fear, anxiety, confusion and sleeplessness).

8 See Danielle Keats Citron and Daniel J. Solove, “Privacy Harms”, Boston University Law Review, Vol. 102, 2022, p. 829 (detailing the importance of a typology for privacy harms, which includes setting the legislative and regulatory agenda and ensuring that such regulations address the full scope of privacy harms).

9 J. Rizk, above note 2 (detailing the ICRC's approach). Note that the ICRC has a forthcoming response framework for harmful information in conflict settings that lays out this position.

10 For what counts as harmful information, see the section below on “Theories Underpinning the Typology”.

11 See e.g. D. Keats Citron and D. J. Solove, above note 8; Ioannis Agrafiotis, Jason Nurse, Michael Goldsmith, Sadie Creese and David Upton, “A Taxonomy of Cyber-Harms: Defining the Impacts of Cyber-Attacks and Understanding How They Propagate”, Journal of Cybersecurity, Vol. 4, No. 1, 2018, available at: https://academic.oup.com/cybersecurity/article/4/1/tyy006/5133288?login=true; Samuele Dominioni and Giacomo Persi Paoli, “A Taxonomy of Malicious ICT Incidents”, United Nations Institute for Disarmament Research, 25 July 2022, available at: https://unidir.org/publication/a-taxonomy-of-malicious-ict-incidents/.

12 For examples, see the section below on “The Typology of Harms Influenced by Harmful Information on Social Media in Conflict Settings”.

13 Misinformation is false information that is unintentionally spread by individuals who believe the information is true, or have not taken the time to verify it. See ICRC, Harmful Information: Misinformation, Disinformation and Hate Speech in Armed Conflict and Other Situations of Violence, Geneva, 2022, p. 7, available at: https://shop.icrc.org/harmful-information-misinformation-disinformation-and-hate-speech-in-armed-conflict-and-other-situations-of-violence-icrc-initial-findings-and-perspectives-on-adapting-protection-approaches.html?___store=en.

14 Disinformation is false information that is intentionally fabricated and/or disseminated, often with malicious intent. Ibid., p. 8.

15 Malinformation is true information spread with the intent to cause harm. Ibid., p. 8.

16 Hateful speech includes hate speech as well as other content that is inflammatory, derogatory or denigrating towards a group of people on the basis of their identity. Note that hate speech itself does not have a universally accepted definition, though it generally involves expressions that denigrate others on the basis of race, colour, religion, descent, nationality, ethnic origin, gender identity or sexual orientation. See ibid., p. 9; United Nations (UN), “United Nations Strategy and Plan of Action on Hate Speech”, 18 June 2019, p. 2, available at: www.un.org/en/genocideprevention/documents/UN%20Strategy%20and%20Plan%20of%20Action%20on%20Hate%20Speech%2018%20June%20SYNOPSIS.pdf. See also International Criminal Tribunal for Rwanda, The Prosecutor v. Ferdinand Nahimana et al., Case No. ICTR-99-52, Judgment (Appeals Chamber), 28 November 2007, para. 986 (defining hate speech as “denigration based on a stereotype”); Nina Peršak, “Criminalising Hate Crime and Hate Speech at EU Level: Extending the List of Eurocrimes under Article 83(1) TFEU”, Criminal Law Forum, Vol. 33, 2022, pp. 91–92 (detailing European Union definitions of hate speech, which consider hate speech to be expressions of hatred on the grounds of certain protected categories); Catherine O'Regan, “Hate Speech Online: An (Intractable) Contemporary Challenge?”, Current Legal Problems, Vol. 71, No. 1, 2018, p. 404 (defining hate speech as speech that incites hatred or is degrading to individuals or groups based on certain attributes such as race, ethnic origin, gender identity or sexual orientation).

17 J. Rizk, above note 2; see also T. Rodenhäuser and S. D'Cunha, above note 3 (discussing limitations under IHL on information operations during armed conflict).

18 International organizations and humanitarian organizations may use different terminology to refer to information-related risks in humanitarian settings. For example, the UN uses “information integrity” to refer to “access to relevant, reliable and accurate information” along with “tolerance and respect in the digital space”, recognizing that digital technologies can facilitate the spread of information that is “harmful to societies and individuals”. UN, Global Digital Compact, September 2024, para. 33, available at: www.un.org/global-digital-compact/sites/default/files/2024-09/Global%20Digital%20Compact%20-%20English_0.pdf. By contrast, the UN Department of Peace Operations (DPO) uses “MDMH”, a shorthand acronym that refers to information that is false, misleading or hateful. See DPO, “Module 1: Addressing Misinformation, Disinformation, Malinformation, and Hate Speech Threats in United Nations Peace Operations for Military and Police Units”, United Nations Peacekeeping Resource Hub, p. 1, available at: https://peacekeepingresourcehub.un.org/en/training/rtp/mdmh. We use “harmful information” because it captures forms of information that may otherwise be ignored (such as information that violates IHL but is not false, misleading or hateful, like photos of prisoners of war) and because it focuses on information that has the potential to produce harm.

19 See, generally, Tilman Rodenhäuser, “The Legal Boundaries of (Digital) Information or Psychological Operations under International Humanitarian Law”, International Law Studies, Vol. 100, No. 1, 2023, pp. 546, 572 (discussing lawful uses of disinformation, including for propaganda purposes, under IHL, along with potentially unlawful uses of disinformation for propaganda where such propaganda involves an unlawful ruse).

20 [Citation redacted] (describing the accusatory narrative targeting a man from a minority group, and the attacks that followed).

21 Ibid.

22 [Citation redacted].

23 See Rachel Xu, “You Can't Handle the Truth: Misinformation and Humanitarian Action”, Humanitarian Law and Policy Blog, 15 January 2021, available at: https://blogs.icrc.org/law-and-policy/2021/01/15/misinformation-humanitarian/ (“False information in isolation is relatively harmless; it becomes harmful when humans absorb information, interpret it with their own biases and prejudices, and are compelled to action”); UK Information Commissioner's Office, “Overview of Data Protection Harms and the ICO's Taxonomy”, April 2022, p. 2, available at: https://ico.org.uk/media/about-the-ico/documents/4020144/overview-of-data-protection-harms-and-the-ico-taxonomy-v1-202204.pdf; Irene Khan, Disinformation and Freedom of Opinion and Expression during Armed Conflicts: Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, UN Doc. A/77/288, 12 August 2022, para. 27, available at: www.ohchr.org/en/documents/thematic-reports/a77288-disinformation-and-freedom-opinion-and-expression-during-armed.

24 [Citation redacted] (reporting on statements made by a politician on social media calling for a village inhabited by civilians of an opposing side to be “erased”, which was followed by violent riots causing death and property destruction).

25 See, generally, D. Keats Citron and D. J. Solove, above note 8, p. 831.

26 Note that unlike many other subsequent harmful acts, this act is carried out by the affected person and not a third party.

27 [Citation redacted]; see also [citation redacted]. For examples beyond conflict settings, see [citation redacted] (misinformation across WhatsApp in a country that is not experiencing armed conflict about supposed “child lifting gang[s]” that were coming to steal children and were armed with “sedatives, injections, spray, cotton, and small towels”); [citation redacted] (misinformation about individuals who were believed to be child snatchers in a country that is experiencing unrest but not armed conflict).

28 [Citation redacted] (detailing how a prominent professor of [redacted] ethnicity was killed by community members following “a string of hateful messages … that slandered and doxed the professor”).

29 [Citation redacted] (describing a “disinformation campaign” targeting the prominent person based on his ethnic background, and accusing him of supporting certain political beliefs shared by other people in his ethnic group; this campaign ended in his murder by individuals).

30 [Citation redacted] (same).

31 [Citation redacted] (reporting on this false information and the consequences of the information, which involved mob attacks against the minority community).

32 [Citation redacted] (reporting on these social media posts in which elites called for a village to be “erased”, which led to violent attacks by members of the majority community who identified with the elites).

33 [Citation redacted] (reporting on hateful narratives that are “spread[ing] hatred and encourag[ing] ethnic polarization”, and are, according to some researchers, accompanied by violent attacks against individuals or communities based on their perceived membership).

34 [Citation redacted] (detailing how this narrative spread on social media in a particular armed conflict).

35 [Citation redacted] (detailing how the murder of a prominent singer and member of an ethnic minority group triggered retaliation and ethnic violence led by mobs, which was further fuelled by additional hate speech and disinformation); [citation redacted] (describing the public doxing and demand for the murder of a prominent professor by community members); [citation redacted] (describing rumours that a certain religious group had launched a plot to kill all members of a religious majority group, which fuelled harmful interpretations of viral posts and eventually led to mob violence against the religious group); [citation redacted] (describing how false news about a supposed planned massacre was used to “mobilize others to take up arms to counter the ‘attack’”).

36 [Citation redacted] (reporting on this misinformation, and further reporting on digital researchers who found spikes in this and other types of false and hateful content in the days preceding attacks perpetrated by government forces and police).

37 [Citation redacted] (reporting on this harmful narrative, and further reporting on digital researchers who found spikes in this and other types of false and hateful content in the days preceding attacks perpetrated by government forces and police).

38 [Citation redacted] (same).

39 [Citation redacted] (detailing examples of harmful narratives, including this one, that were circulating during an armed conflict which included attacks by military forces against civilians).

40 [Citation redacted] (describing this harmful content circulating on social media during an armed conflict in which one side was targeting populated areas of the other).

41 [Citation redacted] (detailing examples of harmful narratives, including this one, that were circulating during an armed conflict which included attacks by military forces against civilians).

42 [Citation redacted] (detailing examples of harmful narratives, including this one, that were circulating during the same armed conflict, in which military forces were accused of attacking civilians).

43 [Citation redacted] (describing findings of ethnic cleansing by security forces against a minority group, which followed rumours that the minority was going to make the majority group “slaves”); [citation redacted] (detailing security forces’ extrajudicial killings against a minority group, the subject of significant hate speech and disinformation in a country experiencing armed conflict); [citation redacted] (describing the mass killing of hundreds of civilians, amounting to what many are calling a war crime); [citation redacted] (reporting that the massacre of hundreds of civilians by troops is a war crime and may amount to a crime against humanity).

44 [Citation redacted] (documenting the increase in harmful narratives and pieces of content circulating across social media and spreading anti-[redacted] sentiment, and later reporting attacks that amounted to genocide occurring around the same time that such harmful information was circulating).

45 [Citation redacted] (finding through independent investigation that this harmful narrative was circulating around the same time that attacks amounting to genocide were targeting the same minority community).

46 [Citation redacted] (finding, through desk research, attacks amounting to genocide committed by armed forces against the targeted minority group).

47 [Citation redacted] (reporting on this harmful narrative and similar narratives that spread on social media, and reporting that the death of the prominent singer kicked off the killings of hundreds of people, many of whom were minorities).

48 [Citation redacted] (finding that harmful narratives, including this one, proliferated across social media around the same time that attacks reportedly amounting to crimes against humanity occurred).

49 [Citation redacted] (reporting on this harmful narrative that circulated on social media around the same time that attacks were committed by a government-supported militia against civilians, most of them members of the same targeted minority group, and referring to these attacks as crimes against humanity).

50 [Citation redacted] (describing acts of violence against a minority group following calls for their “extermination” on social media and reporting that some experts are calling these acts ethnic cleansing and a crime against humanity); [citation redacted] (detailing the crimes against humanity committed against a minority population that was targeted by campaigns spreading harmful information against them on social media); [citation redacted] (detailing probable ethnic cleansing by security forces against a minority group).

51 [Citation redacted] (reporting on this narrative that spread on social media platforms during an armed conflict in which later allegations emerged of torture against the targeted side).

52 [Citation redacted] (detailing survivor – mostly prisoners of war – stories of abuse during unlawful detentions by a warring party, including allegations of torture).

53 [Citation redacted] (describing, through open-source research, social media narratives, including this one, spreading during the same time that sexual or gender-based violence was committed against the targeted minority community).

54 [Citation redacted] (describing the term “[redacted]”, which is a belittling term used towards women of a minority group); [citation redacted] (detailing a survivor's experience, where her attacker used the term “[redacted]” to belittle her before raping her).

55 [Citation redacted] (detailing sexual and gender-based violence committed against an ethnic minority group, which happened at the same time that harmful narratives were circulating on social media describing women of the minority group in derogatory ways); [citation redacted] (detailing instances of sexual abuse in active armed conflict settings); [citation redacted] (detailing sexual and gender-based violence committed by members of armed forces against an ethnic minority group; the abuse occurred at the same time that harmful information was circulating on social media targeting the minority group).

56 [Citation redacted] (detailing this harmful narrative targeting one side of an armed conflict in which the other side’s armed forces are accused of enforced disappearances).

57 [Citation redacted] (reporting this harmful narrative, from an armed conflict in which members of the targeted side were reported to have been abducted or to have suffered from enforced disappearances by the other side).

58 [Citation redacted] (detailing examples of social media content that belittled, stigmatized or demonized a group’s national identity, circulating during an armed conflict in which that side was reported to have suffered abductions and enforced disappearances at the hands of the other side).

59 [Citation redacted] (detailing this harmful narrative along with many others that circulated during an armed conflict in which there were reports of abductions and enforced disappearances against that minority group).

60 [Citation redacted] (describing false reporting on social media that led to arbitrary arrests of accused persons). Note that not all may consider this particular context an armed conflict; however, it has been classified as such by the Stockholm International Peace Research Institute and, given the significant organized violence involved, it is included in the typology.

61 [Citation redacted] (detailing enforced disappearances in an armed conflict during which harmful narratives circulated claiming that civilians whose nationality was the same as the opposing side were evil); [citation redacted] (detailing abductions and enforced disappearances in an armed conflict in which harmful narratives circulated targeting one side, including declaring that their soldiers are driving the other side away “like dogs” and saying that the other side represents a colony); [citation redacted] (detailing reported abductions committed by armed forces against a minority group in an armed conflict that was fought while harmful information circulated on social media targeting that minority group, including speech accusing the minority group of all being thieves); [citation redacted] (reporting on arbitrary arrests following false accusations on social media).

62 [Citation redacted] (describing how “hate speech” spreading on social media is fuelling an armed conflict and mistreatment of the targeted ethnic minority group, and reporting that starvation is occurring amongst the same targeted ethnic minority group, which is suffering under a government-imposed blockade).

63 [Citation redacted] (reporting on this narrative circulating on social media targeting an ethnic minority group in an armed conflict during which a blockade was instituted by State armed forces).

64 [Citation redacted] (detailing hateful and derogatory social media content targeting an ethnic minority which is suffering under a government-imposed blockade preventing aid from arriving); [citation redacted] (describing intentional destruction of crops and seeds, and the resulting widespread starvation, in the region where the aforementioned targeted ethnic minority lives); [citation redacted] (detailing a blockade depriving significant portions of the population, including the targeted minority group, of necessary food; the group had been targeted by a harmful narrative calling for its extermination).

65 [Citation redacted] (reporting the spread of false information on social media that refugees from an armed conflict are intending to carry out attacks on the receiving country).

66 [Citation redacted] (reporting on this harmful narrative in a conflict in which the government blocked members of an ethnic minority from leaving to seek asylum).

67 [Citation redacted] (describing government efforts to prevent members of an ethnic minority from leaving the country by denying them access to airports); [citation redacted] (describing the closing of borders for refugees fleeing an armed conflict who were the target of false harmful information circulating on social media alleging that they were planning attacks against receiving countries); [citation redacted] (describing a new law that will require [redacted] refugees to pay for part of their accommodations if they stay in [redacted] for a certain period of time).

68 See Office of the UN High Commissioner for Refugees, “UNHCR Warns Asylum under Attack at Europe's Borders, Urges End to Pushbacks and Violence against Refugees”, press release, 28 January 2021, available at: www.unhcr.org/us/news/news-releases/unhcr-warns-asylum-under-attack-europes-borders-urges-end-pushbacks-and-violence.

69 [Citation redacted] (describing the [redacted] government's decision to suspend licenses for certain humanitarian groups due to supposed biases in favour of an ethnic minority group, including peddling arms for rebels, claims of which also circulated on social media); and see, generally, R. Xu, above note 23.

70 [Citation redacted] (reporting false information spread online alleging that humanitarians brought a deadly virus into a country).

71 [Citation redacted] (describing a disinformation campaign labelling a rescue group as a terrorist organization).

72 [Citation redacted] (describing [redacted] government's three-month ban of two humanitarian organizations due to alleged “misinformation”, including expressing concern about the destruction of refugee camps and lack of aid reaching populations in need, along with being biased in favour of one ethnic group).

73 [Citation redacted] (detailing false information circulating on social media targeting the humanitarian organization, which, according to the organization, put aid workers at risk and threatened to disrupt operations, potentially cutting people off from necessary aid).

74 [Citation redacted] (detailing a government’s disinformation about the leader of a humanitarian organization); [citation redacted] (similarly detailing the same government’s disinformation about the organization’s leader, including false claims that the organization peddled weapons to an opposing armed group, and its demands for an investigation).

75 See [citation redacted] (describing how armed groups attack humanitarian treatment centres and use the false information as justification for their attacks); [citation redacted] (noting that misinformation targeting a humanitarian organization is jeopardizing that organization's ability to deliver needed humanitarian assistance); [citation redacted] (reporting that the [redacted] government has created a de facto humanitarian blockade by preventing life-saving medicine and food from reaching a rebel-held area); and see, generally, ICRC, above note 13, p. 11 (describing how misleading information online about life-saving services or resources could result in civilians being misdirected away from help and potentially towards harm).

76 [Citation redacted] (describing the [redacted] government using starvation as a weapon of warfare through the creation of a humanitarian blockade).

77 [Citation redacted] (describing disinformation about the timing and location of evacuation corridors, which in turn, according to the report, caused civilians to take incorrect or unsafe evacuation routes).

78 [Citation redacted] (describing false information about the [redacted] government's targeting of the evacuation corridors or the government turning away fleeing civilians).

79 [Citation redacted] (describing such false information about evacuation corridors leading to confusion and likely causing civilians to make decisions “against their best interest in terms of safety and security” about when to flee and the proper route to take).

80 See, generally, D. Keats Citron and D. J. Solove, above note 8, p. 834.

81 [Citation redacted] (detailing this harmful narrative, made up of hateful speech against an ethnic minority group on social media, which circulated in a conflict during which theft and looting targeting that minority group occurred).

82 [Citation redacted] (describing theft and looting that targeted the same minority group, which happened around the same time that hateful content against the group was circulating on social media).

83 [Citation redacted] (detailing this harmful narrative targeting a man of a minority religious identity, which was followed by the destruction of property owned by minority community members); [citation redacted] (detailing the same incidents).

84 [Citation redacted] (reporting on photos of property destruction committed against ethnic minority members that circulated on social media with captions encouraging others to participate).

85 [Citation redacted] (describing misinformation about a false plot planned by a minority group, which spread rapidly on social media, the deadly altercation that preceded the misinformation, and the subsequent destruction of the minority group’s property).

86 [Citation redacted] (reporting a harmful narrative that incited violence against a minority group and was followed by the destruction of property owned by the group).

87 [Citation redacted] (describing burning and widespread property destruction in a minority town led by a majority-group mob inspired by rumours, misinformation and incitement on social media, including photos of the ongoing destruction with encouragement to join in); see also [citation redacted] (describing the burning of minority group property following malinformation about property destruction).

88 [Citation redacted] (detailing such content that circulated on social media targeting an ethnic minority group in a conflict setting, during which members of the group were denied employment access and had their bank accounts suspended by the government).

89 [Citation redacted] (describing the [redacted] government suspending bank accounts opened at branches in a rebel-held area, and recalling members of the ethnic group living in the area or telling them not to come to work, particularly those employed in certain State agencies such as embassies or the police).

90 [Citation redacted] (detailing such posts circulating on social media that targeted a minority community whose members were later forcibly evicted from their homes by government forces).

91 [Citation redacted] (detailing forced evictions of an ethnic minority community around the same time as harmful narratives circulated advocating for the removal of the community from that land).

92 [Citation redacted] (detailing this rapid increase in harmful information against an ethnic minority group, examples of which can be seen throughout this typology; the proliferation of such content precipitated a government-initiated internet shutdown).

93 [Citation redacted] (reporting, in the same armed conflict context, a government-imposed internet shutdown allegedly intended to curb harmful information spreading on social media; note that this subsequent harmful act is cross-listed below in the society-wide harms category).

94 [Citation redacted] (describing, in the same armed conflict in which the government imposed an internet shutdown, the loss of revenue from people who could not access virtual banking or send or receive money from abroad).

95 [Citation redacted] (reporting this false information spread on social media claiming that refugees fleeing an armed conflict were planning violent attacks against host countries, which was followed by host countries limiting access to asylum).

96 [Citation redacted] (describing fabricated content circulating on social media that purported to show refugees attacking individuals in their host countries). This example occurred in a refugee-hosting country with no ongoing armed conflict; it is nonetheless included in the typology because it illustrates how the impact of harmful information on people affected by armed conflict may continue outside the situation of armed conflict itself.

97 [Citation redacted] (describing the [redacted] government's plans to require refugees to cover half of their accommodation costs, and 75% if they stay in [redacted] for longer than a certain period of time).

98 This range of psychological harms is largely captured in Amnesty International's study on the effects of misinformation, disinformation and hate speech on women. See Amnesty International, “Toxic Twitter: The Psychological Harms of Violence and Abuse against Women Online”, 20 March 2018, available at: www.amnesty.org/en/latest/news/2018/03/online-violence-against-women-chapter-6-6/.

99 While not the topic of this research, note that some scholarship has indicated that psychological distress itself causes physical harm. See e.g. Ryan Shandler, Michael Gross and Daphna Canetti, “Cyberattacks, Psychological Distress, and Military Escalation: An Internal Meta-Analysis”, Journal of Global Security Studies, Vol. 8, No. 1, 2023, available at: https://academic.oup.com/jogss/article/8/1/ogac042/6988925.

100 [Citation redacted] (detailing the doxing of a prominent professor, who was then murdered by a mob); [citation redacted] (describing the circulation on social media of a list of human rights activists and calling for their prosecution with a death sentence if found guilty of “apostasy”, following the arbitrary arrest of several human rights activists, and detailing several human rights organizations that are being targeted with threats).

101 [Citation redacted] (reporting on a [redacted] social media narrative that accused non-supporters of a political leader of being homosexuals and of supporting an autonomous rebel area, in an armed conflict setting in which other prominent individuals have been doxed and attacked).

102 [Citation redacted] (describing death threats being shared publicly on social media targeting a professor who was doxed and subsequently killed by a mob in an armed conflict).

103 This list of harms was generated using Amnesty International's study. See Amnesty International, above note 98.

104 [Citation redacted] (reporting videos circulating on social media depicting signs of torture during an armed conflict and detailing the victims’ family members’ experiences of viewing these videos).

105 [Citation redacted] (describing the circulation of videos depicting torture and abuse viewed by family members during an armed conflict).

106 [Citation redacted] (detailing disinformation denying the occurrence of a well-documented massacre).

107 [Citation redacted] (describing the emotional strain experienced by members of one nationality whose family members, living in the opposing side’s country, do not believe that the harms resulting from the war are occurring).

108 [Citation redacted] (reporting, in the same armed conflict, how the president's declaration that the opposing side were being driven away “like dogs” circulated around social media).

109 [Citation redacted] (describing credible reports and evidence of corpse desecration by troops of one country against dead troops of the opposing side).

110 See D. Keats Citron and D. J. Solove, above note 8, p. 859; Victoria Canning and Steve Tombs, “A Provisional Typology of Harm”, in Victoria Canning and Steve Tombs, From Social Harm to Zemiology: A Critical Introduction, Routledge, Abingdon, 2021, p. 78.

111 Simon Pemberton, Harmful Societies: Understanding Social Harm, Policy Press, Bristol, 2015, p. 31.

112 Reputational harm involves injuries to an individual's reputation and standing in the community, impairing their ability to maintain “personal esteem in the eyes of others” and potentially resulting in lost business, lost employment or social rejection. US law treats reputational harms as distinct forms of harm separate from physical or property injuries. D. Keats Citron and D. J. Solove, above note 8, p. 837; see also US Supreme Court, Rosenblatt v. Baer, 383 U.S. 75, 92 (1966) (Stewart, J., concurring).

113 See V. Canning and S. Tombs, above note 110.

114 [Citation redacted] (reporting common harmful narratives that spread during a particular armed conflict, including posts that referred to opponents or peace supporters as traitors).

115 [Citation redacted] (detailing harmful narratives on social media in the same armed conflict targeting a person based on their religious identity).

116 [Citation redacted] (reporting, in the same armed conflict, social media posts accusing certain members of an ethnic group of being lepers). Note that some people who use this term in a derogatory way may erroneously believe that leprosy is hereditary and that it is passed down within a particular ethnic group: see [citation redacted] (reporting this fact).

117 [Citation redacted] (detailing this harmful narrative, which circulated on social media at the same time that acts of discrimination against the targeted minority group were occurring; these acts included denying group members citizenship rights, which limited their ability to work or obtain other legal rights, among other consequences).

118 [Citation redacted] (detailing acts of discrimination against the targeted minority group). Note that acts of discrimination also cause harm to society and are thus cross-listed below as society-wide harms.

119 [Citation redacted] (finding harmful narratives circulating on social media that called for the removal of a minority group; the group was subsequently forcibly removed and displaced, and independent researchers found the narratives to be directly linked to that forced removal).

120 [Citation redacted] (detailing this harmful narrative, which was followed by the forced displacement of the targeted ethnic group).

121 [Citation redacted] (reporting displacement and deportation in the same armed conflict setting as that in which the narratives circulated advocating for the group's removal; independent researchers found a connection between the two); [citation redacted] (detailing widespread displacement of a targeted ethnic group, about which harmful narratives circulated referring to them as “settlers” and “settler-colonists”); see also [citation redacted] (reporting that the highest number of displacements in that year were in the same armed conflict setting).

122 [Citation redacted] (listing harmful pieces of content that circulated during the armed conflict, including the piece quoted here, which appeared to advocate for the destruction of one side’s cultural property; cultural property belonging to that side was in fact destroyed during the conflict).

123 [Citation redacted] (describing alleged destruction of an opposing side's church, monuments and graves, photos of which were then shared on social media; this is the same side that was targeted in the harmful narrative described above).

124 Kristina Hook and Ernesto Verdeja, “Social Media Misinformation and the Prevention of Political Instability and Mass Atrocities”, Stimson Center, 7 July 2022, p. 44, available at: www.stimson.org/2022/social-media-misinformation-and-the-prevention-of-political-instability-and-mass-atrocities/.

125 See I. Khan, above note 23, para. 26.

126 D. Keats Citron and D. J. Solove, above note 8, p. 854.

127 See e.g. [citation redacted] (describing widespread disinformation campaigns by the government and rebel groups, including but not limited to a viral video announcing incorrectly that a major town had been retaken by the government); [citation redacted] (describing government forces’ downplaying of how long the war was likely to last, undermining civilian trust in government communications).

128 [Citation redacted] (describing a government's denial of a UN report detailing the blockading of humanitarian assistance and forced starvation against civilians, amounting to possible crimes against humanity).

129 Leila Bendra and Dominik Lehnert, In the Shadow of Violence: The Pressing Needs of Sudanese Journalists, UNESCO, October 2023, available at: www.unesco.org/en/articles/shadow-violence-pressing-needs-sudanese-journalists (finding that, out of 213 interviewed journalists, 51% experienced digital threats, undermining independent media and journalists’ ability to report factually accurate stories); [citation redacted] (describing social media posts expressing dissenting views by civil society leaders being used as justification for detention).

130 [Citation redacted] (describing a hashtag campaign in a country, during an armed conflict, which labelled opponents of the armed conflict as “traitors”).

131 [Citation redacted] (detailing the doxing of a prominent professor during an armed conflict).

132 [Citation redacted] (describing false or misleading information on social media favouring one side that alleged – with no evidence – that “[redacted] authorities are disproportionately conscripting people from [redacted] and unfairly rationing electricity between western and eastern regions of the country”).

133 [Citation redacted] (describing a surge in hate speech and incitement of violence in an armed conflict on a social media platform following the killing of a prominent figure).

134 [Citation redacted] (describing the blocking of social media platforms in a country during an eruption of armed conflict); [citation redacted] (reporting on a two-year-long internet outage in a country where there was armed conflict, and the repercussions of such a long blackout).

135 [Citation redacted] (detailing harmful narratives, using speech quoted here, that were targeted at a minority group; the group was later targeted by the State with a series of discriminatory acts).

136 For examples of discriminatory acts, see the section above on “Harms to the Social or Cultural Well-Being of People Affected by Armed Conflict”.

137 See D. Keats Citron and D. J. Solove, above note 8, p. 855.

138 [Citation redacted] (researching a peace deal in a country on the verge of transitioning out of armed conflict, and discovering a disinformation campaign, much of it spread on social media, that appeared to lead voters to reject the deal).

139 [Citation redacted] (describing voters’ rejection of a peace deal, which was fuelled by disinformation about the effects of the deal).

140 See e.g. Global Protection Cluster, “Global Protection Risks: Disinformation and Denial of Access to Information”, available at: https://globalprotectioncluster.org/Disinformation (describing disinformation and denial of access to information as a protection risk and noting that disinformation or information access denial are often “the cause or driver of the other 14 protection risks”); Keith Proctor, “Social Media and Conflict: Understanding Risks and Resilience”, Mercy Corps, July 2021, available at: www.mercycorps.org/sites/default/files/2021-08/Digital-Conflict-Research-Summary-and-Policy-Brief-073021.pdf (recommending that humanitarian organizations addressing social media content also incorporate components that focus on offline harm risks); Chris B. and L. Spink, above note 2 (noting that false or hateful content on social media presents protection risks for civilians).

141 See J. Rizk, above note 2 (recommending a “conflict-specific approach” to harmful information that considers contextual factors which are unique to conflict settings).

142 See Samidh Chakrabarti and Rosa Birch, “Understanding Social Media and Conflict”, Meta, 20 June 2019, available at: https://about.fb.com/news/2019/06/social-media-and-conflict/ (noting the unique risks that social media content presents in conflict settings or areas at risk of conflict); see also J. Rizk, above note 2.

143 See, generally, S. Chakrabarti and R. Birch, above note 142 (observing that “borderline” content which does not violate Meta's policies may be more likely to produce “serious consequences” in conflict settings).

144 Yamil Ricardo Velez and Patrick Liu, “Confronting Core Issues: A Critical Assessment of Attitude Polarization Using Tailored Experiments”, American Political Science Review, FirstView, 8 August 2024, available at: https://tinyurl.com/2wv99pez (finding, through a tailored experiment that exposed participants to information counter to their core issue positions, that attitudes polarized when the counter-information included “vitriolic” or “contentious” arguments).

145 See e.g. Melissa Carlson, Laura Jakli and Katerina Linos, “Rumors and Refugees: How Government-Created Information Vacuums Undermine Effective Crisis Management”, International Studies Quarterly, Vol. 62, No. 3, 2018, p. 672, available at: https://academic.oup.com/isq/article/62/3/671/5076384 (finding that information vacuums, even ones that are unintentionally created, provide fertile ground for rumours and misinformation to spread).

146 See Arash Ziapour et al., “The Role of Social Media Literacy in Infodemic Management: A Systematic Review”, Frontiers in Digital Health, Vol. 6, 14 February 2024, available at: www.ncbi.nlm.nih.gov/pmc/articles/PMC10899688/ (surveying recent scholarship and finding that media literacy programmes for the general public present a promising and effective mechanism for combating false information and enabling users to differentiate false from true content); see also Pranav Malhotra et al., “User Experiences and Needs When Responding to Misinformation on Social Media”, Harvard Kennedy School Misinformation Review, Vol. 4, No. 6, 2023, available at: https://misinforeview.hks.harvard.edu/article/user-experiences-and-needs-when-responding-to-misinformation-on-social-media/ (finding that social media users who identify misinformation highlight emotional labour as one of the impacts of such information, and desire tools that help support responses which take into account emotional and relational context).

147 See “Types of Content We Demote”, Meta Transparency Center, 16 October 2023, available at: https://transparency.meta.com/features/approach-to-ranking/types-of-content-we-demote (describing Meta's policies for demoting content that might be “problematic or low quality”); “About Boosted Posts”, Meta Business Help Center, available at: https://tinyurl.com/mr3jv57c (detailing Meta's post boost system); see also Kaitlyn Regehr, Caitlin Shaughnessy, Minzhu Zhao and Nicola Shaughnessy, Safer Scrolling: How Algorithms Popularise and Gamify Online Hate and Misogyny for Young People, University College London and University of Kent, 2024, available at: www.ascl.org.uk/ASCL/media/ASCL/Help%20and%20advice/Inclusion/Safer-scrolling.pdf (detailing how algorithms on social media recommend harmful content, which then harms young people who see it); Sikudhani Foster-McCray, “Comment: When Algorithms See Us: An Analysis of Biased Corporate Social Media Algorithm Programming and the Adverse Effects These Social Media Algorithms Create When They Recommend Harmful Content to Unwitting Users”, Southern Journal of Policy and Justice, Vol. 18, 2024, pp. 13–27 (detailing how algorithms on social media platforms push negative and racially biased content which harms Black users).

148 See e.g. Paige Collings and Jillian C. York, “Social Media Platforms Must Do Better when Handling Misinformation, Especially during Conflict”, Electronic Frontier Foundation, 17 October 2023, available at: www.eff.org/deeplinks/2023/10/social-media-platforms-must-do-better-when-handling-misinformation-especially (recommending that social media platforms institute conflict awareness training for staff, senior decision-makers and content monitoring specialists, and collaborate with experts to improve internal crisis monitoring teams).

149 See Lauren Spink, When Words become Weapons: The Unprecedented Risks to Civilians from the Spread of Disinformation in Ukraine, Center for Civilians in Conflict, November 2023, pp. 45–47 (detailing Meta's crisis response team focused specifically on Ukraine, and recommending similar structures for other conflicts); see also K. Hook and E. Verdeja, above note 124 (recommending that social media companies provide greater investments for local partners in the global South); P. Collings and J. C. York, above note 148 (recommending that social media platforms invest in relevant language expertise when armed conflicts arise).

150 See Eian Katz, “Liar's War: Protecting Civilians from Disinformation during Armed Conflict”, International Review of the Red Cross, Vol. 102, No. 914, 2021, available at: https://international-review.icrc.org/articles/protecting-civilians-from-disinformation-during-armed-conflict-914 (detailing the possible implications of disinformation spreading during armed conflict under IHL, IHRL and international criminal law).

151 Protocol Additional (I) to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts, 1125 UNTS 3, 8 June 1977 (entered into force 7 December 1978) (AP I), Art. 51; Protocol Additional (II) to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts, 1125 UNTS 609, 8 June 1977 (entered into force 7 December 1978) (AP II), Art. 13; see also E. Katz, above note 150.

152 International Covenant on Civil and Political Rights, 999 UNTS 171, 16 December 1966 (ICCPR), Art. 20(2).

153 Rome Statute of the International Criminal Court, 2187 UNTS 90, 17 July 1998, Art. 8(2)(a)(iii); see also E. Katz, above note 150.

154 ICCPR, above note 152, Arts 19(2) (freedom of expression), 7 (prohibition of torture), 6 (prohibition of arbitrary deprivation of life).

155 Ibid., Art. 19(3); see also Article 19, Content Moderation and Freedom of Expression Handbook, August 2023, pp. 10–11, available at: www.article19.org/wp-content/uploads/2023/08/SM4P-Content-moderation-handbook-9-Aug-final.pdf (articulating a test that freedom of expression limitations should be necessary and proportionate, meaning that they should “demonstrate a direct and immediate connection between the expression” and the protection interest, and should opt for a less intrusive measure if available, although noting that hate speech presents unique complications).

156 See Article 19, “Response to the Consultation of the UN Special Rapporteur on Freedom of Expression on Her Report on Challenges to Freedom of Opinion and Expression in Times of Conflicts and Disturbances”, 19 July 2022, pp. 10, 12, available at: www.ohchr.org/sites/default/files/documents/issues/expression/cfis/conflict/2022-10-07/submission-disinformation-and-freedom-of-expression-during-armed-conflict-UNGA77-cso-article19.pdf (recommending that States avoid unduly broad restrictions on information during armed conflict, and arguing that prohibitions are an ineffective response to wartime propaganda).

157 See e.g. Amnesty International, A Human Rights Approach to Tackle Disinformation, 14 April 2022, p. 13 (encouraging protection of the right to freedom of expression and encouraging the spread of accurate information, rather than simply cracking down on vague, poorly defined categories of content like “false news”).

158 See, generally, Jonathan Horowitz, “One Click from Conflict: Some Legal Considerations Related to Technology Companies Providing Digital Services in Situations of Armed Conflict”, Chicago Journal of International Law, Vol. 24, No. 2, 2024 (detailing possible legal implications for tech company employees and properties under IHL).

159 See Office of the UN High Commissioner for Human Rights, Guiding Principles on Business and Human Rights, 2011, Principle 7; see also Isedua Oribhabor, “Tech and Conflict: A Guide for Responsible Business Conduct”, AccessNow, 20 October 2023, available at: www.accessnow.org/guide/tech-and-conflict-a-guide-for-responsible-business-conduct/#conduct-during-conflict/.

160 K. Hook and E. Verdeja, above note 124, p. 44; see also I. Khan, above note 23, para. 26.

161 See I. Khan, above note 23, para. 23.

162 See Article 19, above note 156, p. 10 (describing common government tactics of using disinformation as an excuse to silence dissent).

163 See the section above on “Society-Wide Harms” (detailing heightened stigmatization from harmful narratives spreading on social media platforms); Pramukh Nanjundaswamy Vasist, Debashis Chatterjee and Satish Krishnan, “The Polarizing Impact of Political Disinformation and Hate Speech: A Cross-Country Configural Narrative”, Information Systems Frontiers, Vol. 26, 17 April 2023, p. 1, available at: www.ncbi.nlm.nih.gov/pmc/articles/PMC10106894/ (surveying 177 countries and finding that hate speech and political disinformation catalyze societal polarization).

164 See, generally, Eduardo Albrecht, Eleonore Fournier-Tombs and Rebecca Brubaker, Disinformation and Peacebuilding in Sub-Saharan Africa: Security Implications of AI-Altered Information Environments, Interpeace, February 2024, pp. 13–22, available at: www.interpeace.org/wp-content/uploads/2024/02/disinformation_peacebuilding_subsaharan_africa.pdf (detailing how disinformation undermines peace efforts and negotiations); Judit Bayer et al., Disinformation and Propaganda – Impact on the Functioning of the Rule of Law in the EU and its Member States, Directorate General for Internal Policies of the European Union, 2019, pp. 63–64, available at: www.europarl.europa.eu/RegData/etudes/STUD/2019/608864/IPOL_STU(2019)608864_EN.pdf (describing how the spread of disinformation and propaganda may promote societal responses that are driven by heightened emotions like fear or anger, which may in turn undermine respect for the rule of law).

165 See Amnesty International, above note 157, p. 15 (recommending that States ensure access to reliable information by, among other steps, promoting media, information and digital literacy in order to equip people with “critical thinking tools” that can help distinguish unverifiable information).

Figure 1. Misconstrued relationship between information on social media and harmful acts in conflict settings.

Figure 2. Information properly situated within broader contexts.

Figure 3. Information influencing harm without a harmful act.

Table 1. Harmful acts in each category potentially influenced by harmful information on social media.

Table 2. Physical harms potentially influenced by harmful information on social media.

Table 3. Economic or financial harms potentially influenced by harmful information on social media.

Table 4. Psychological harms potentially influenced by harmful information on social media.

Table 5. Social or cultural harms potentially influenced by harmful information on social media.

Table 6. Society-wide harms potentially influenced by harmful information on social media.