
On the way to deep fake democracy? Deep fakes in election campaigns in 2023


Mateusz Łabuz*
Affiliation:
Chemnitz University of Technology, Chemnitz, Germany; University of the National Education Commission, Kraków, Poland; The Pontifical University of John Paul II, Kraków, Poland
Christopher Nehring*
Affiliation:
Faculty of Journalism and Mass Communication, Sofia University St. Kliment Ohridski, Sofia, Bulgaria

Abstract

The development of generative artificial intelligence raises justified concerns about the possibility of undermining trust in democratic processes, especially elections. Deep fakes are often considered one of the particularly dangerous forms of media manipulation. Successive studies confirm that they strengthen the sense of uncertainty among citizens and negatively affect the information environment. The aim of this study is to analyse the use of deep fakes in 11 countries in 2023 in the context of elections and to indicate potential consequences for future electoral processes, in particular with regard to the significant number of elections in 2024. We argue that the so-called "information apocalypse" narrative emerges mainly from exaggeratedly alarmist voices that make it difficult to shape responsible narratives and may have the features of a self-fulfilling prophecy. Thus, we suggest using the term "pollution" instead and improving scientific and journalistic discourse, which might be a precondition for reducing threats related to social reactions to deep fakes and their potential.

Information

Type
Research
Creative Commons
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Copyright
Copyright © 2024 The Author(s)

Introduction

The rapid development of generative artificial intelligence (AI) and its use for political purposes raises justified concerns about sophisticated forms of manipulation that might undermine democratic processes, particularly elections (Muñoz 2023).

Researchers regularly consider future scenarios in which AI plays a major role, expressing the "fear of what could happen" (Wahl-Jorgensen & Carlson 2021). This draws the attention of policymakers and the public to specific threats and helps to imagine the challenges society will face, but it might also lead to excessive demonization of AI and its incarnations (Yadlin-Segal & Oppenheim 2021).

Deep fakes can be considered a good example of this phenomenon. They can be defined as AI-generated or AI-manipulated audio, image or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (European Commission 2024). They have found numerous applications in various fields, ranging from the extremely useful to the morally questionable, dangerously harmful or directly illegal (Farid & Schindler 2020; Pawelec 2022). This diversity of applications does not allow deep fakes to be qualified as a bad technology per se (de Ruiter 2021), but specific threats stemming from their misuse should not be underestimated, as they have already been effectively employed to discredit and ridicule political opponents, fuel social conflicts, spread hate speech, strengthen gender inequalities and spread disinformation (Chesney & Citron 2019; Kleemann 2023). The number of malicious applications has led some researchers to develop a dystopian vision of an information or epistemic "apocalypse" that threatens the information environment and society (Schick 2020).

One of the key threats analysed in the context of the multiplication of deep fakes in the information space is the possibility of influencing electoral processes, both by promoting or discrediting specific candidates and by undermining trust in elections as such (Chesney & Citron 2019; Langa 2021). Recent years have seen a massive increase in the quality of deep fake technology as well as its "democratization", that is, its availability to an unlimited audience at little cost. Nowadays almost anyone "can fabricate fake videos that are practically indistinguishable from authentic media" (Westerlund 2019, p. 39) by using easily accessible, downloadable software that allows far-reaching interference in audiovisual content.

Gains in the field of generative AI naturally strengthen the resources available to state and non-state actors, lowering the cost and time needed for the creation and propagation of digital disinformation (Maham & Küspert 2023). Elections are one of the potential targets of malicious activities, as many researchers have repeatedly noted. This study aims to analyse relevant cases of the use of deep fakes in the context of elections that were reported in 2023. In doing so, the study focuses on three overarching questions:

  1. How were deep fakes used in 2023 in the context of elections?

  2. Did deep fakes significantly influence the outcome of a particular election?

  3. To what extent does the "information apocalypse" narrative reflect the direct impact of deep fakes on election results?

These questions are particularly important for the global super election year 2024, which, due to the number of citizens going to vote, may be symbolically called the “Year of Democracy” (Global Coalition for Tech Justice 2023).

Methodology and limitations of the study

The aim of this study is to verify to what extent dystopian expectations of an "information apocalypse" (Schick 2020) fuelled by deep fakes have already materialized, exclusively in the context of elections. We limited the scope of the study to elections because the main part of the public discourse on deep fakes is dominated specifically by concerns about elections. The study analyses a non-representative sample of case studies of deep fakes that were reported in eleven countries. These countries (USA, Turkiye, Argentina, Poland, UK, France, India, Bulgaria, Taiwan, Indonesia and Slovakia) were chosen because they either held an election in 2023 or had ongoing election campaigns.

The selection of the 11 countries analysed below is based on the major (general, parliamentary/legislative or presidential) election calendars for 2023 and 2024. Data for election calendars were provided by The Association of World Election Bodies (2023). We identified 85 different relevant elections that met the criteria. In December 2023 we conducted a Google search query for the generic phrases "deepfake AND election" and "deep fake AND election", as well as the specific phrases "deepfake AND {country}" and "deep fake AND {country}". Reports related to the corresponding electoral processes were identified manually, and an enhanced search for positive matches was conducted afterwards. The initial search was limited to English and complemented with local media reports, where possible. Two cases (France, UK) were identified additionally, in a way unrelated to the basic query. We were able to reach most of the deep fakes identified, which allowed us to estimate their visibility based on social media entries or media reports. Such data have significant limitations, because they only take into account the main sources of dissemination of AI-generated content.
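
For transparency, the construction of the search phrases can be sketched as follows. This is a minimal illustration in Python; the actual queries were entered manually into Google's search tools, so the programmatic form below is an assumption made for clarity, not the tooling used in the study:

    # A minimal sketch of the search-phrase construction described above.
    # Illustrative only: the actual queries were run manually via Google
    # search tools in December 2023, not through a programmatic API.
    from itertools import product

    COUNTRIES = ["USA", "Turkiye", "Argentina", "Poland", "UK", "France",
                 "India", "Bulgaria", "Taiwan", "Indonesia", "Slovakia"]
    SPELLINGS = ["deepfake", "deep fake"]  # both spellings were queried

    # Generic phrases: "deepfake AND election", "deep fake AND election"
    generic = [f"{s} AND election" for s in SPELLINGS]

    # Country-specific phrases: "deepfake AND {country}", "deep fake AND {country}"
    specific = [f"{s} AND {c}" for s, c in product(SPELLINGS, COUNTRIES)]

    for phrase in generic + specific:
        print(phrase)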

We excluded all cases of deep fakes that did not appear in the direct context of a particular election. In 2023 alone, deep fakes were used for political purposes in many countries (Estonia, Germany, Israel, Japan, Serbia and Sudan, among others), and they may also subsequently influence voters' behaviour and contribute to the undermining of trust in information and the media described in this study. However, none of these cases indicated a direct link to election processes. We focus on the holistic deep fake landscape, not ignoring satirical forms or political advertising, which also contribute to the negative phenomena mentioned above.

A limitation of the study is its reliance on media reports in English, which means that cases reported locally might have been omitted and gone undetected. Researching media reports is nevertheless justified in view of the "information apocalypse" narrative, which stems from journalistic discourse. Our goal is not to cover all deep fakes, as some of them are shared in limited, often closed circles and do not get media coverage. However, it should be taken into account that these, too, shape trust in information and the media and may have a cumulative negative impact on the epistemic value of audio and visual content. The likelihood of deep fakes being used increases as the time remaining until an election decreases, which means that in the case of the 2024 elections, many deep fakes may be yet to come.

A significant limitation of these considerations is the lack of an appropriate methodological framework for measuring the impact of deep fakes on election results. Deep fakes are part of the disinformation toolset, and it seems impossible to precisely determine the limits of their influence. Therefore, we focus on identifying a direct and decisive impact in order to recognize any correlation.

Narrative on “information apocalypse”

Since 2018, scientists, philosophers and journalists (Schwartz 2018; Rini 2020; Schick 2020; Toews 2020; Fallis 2021) have been fuelling fears that deep fakes could lead to a so-called "information apocalypse", linked to a gradual erosion of public trust in information and the media. These processes were expected to completely blur the boundaries between what is true and false. Rini (2020) coined the notion of an "epistemic backstop" to describe the reduced epistemic value of visual recordings that historically were associated with credible media, even if they could become subject to falsification or alteration (Galston 2020; Fallis 2021; Geddes 2021).

Detailed consideration of the erosion of trust is necessary to understand the effects of deep fakes on our cognitive processes and the information space, whereas the use of terms such as "apocalypse" or "infocalypse", or the questioning of the epistemic value of recordings as such, seems at least problematic. What exactly is an apocalypse? In classical philosophical and literary terms, it refers to destruction, the end of the world or a global catastrophe. The gradual secularization of the term has popularized its use in non-religious contexts (CenSAMM 2021).

Apocalyptic terms have already been used to describe the negative consequences of disinformation and fake news (Stover 2018; Eder 2019), in what might be called a "post-truth narrative" (Habgood-Coote 2023). They have been transposed to the discourse about deep fakes, shaping doomsday scenarios (Broinowski 2022). "Infodemic" (a "pandemic of information disorders") was also commonly used after the COVID-19 pandemic, which resulted in a wave of disinformation and fake news (Lim 2023). Habgood-Coote (2023) refers to an "epistemic apocalypse" but critically assesses apocalyptic predictions, whereas Horvitz (2022) expects that future generations might find themselves in a "post-epistemic world". These phenomena might also be associated with "reality apathy" resulting from constant exposure to misinformation (Schwartz 2018), or with questioning the normative claim that seeing means believing (Galston 2020). The latter was intended to express declining trust in information and its carriers and heralded the end of ocularcentrism (Geddes 2021). In contrast, Immerwahr (2023), using Habgood-Coote's (2023) arguments on social verification, argues that we rarely rely purely on our eyes, and that reasoning still plays a major role in distinguishing real from fake. This argument may work for content that is objectively easy to distinguish, but in the case of hyper-realistic audio or visual deep fakes it may not be valid.

A study published in October 2023 (Twomey et al. 2023, p. 17) suggests that deep fake videos indeed undermine epistemic trust, but that the media coverage might "be disproportionate to the threat we are currently facing" and this response might be "creating more distrust and contributing to an epistemic crisis". Alarmist voices draw attention to certain dangerous phenomena, but they may also foster the false belief that an "information apocalypse" is already occurring (Habgood-Coote 2023), thus contributing to the increasingly negative psychological effects of deep fake technology. In doing so, exaggerated fears about the possible effects of deep fake technology in the context of elections might themselves contribute to an "over-pollution" of public discourse on AI and deep fakes.

Our rejection of the doomsday approach is not merely semantic. Shaping appropriate narratives and improving journalistic discourse are not yet widely recognized as potential countermeasures against deep fakes (Horvitz 2022; Simon et al. 2023). A recent study confirmed this by pointing to the "relatively narrow conceptualization and understanding of deepfakes and their impact on society at large in journalistic discourse" (Weikmann & Lecheler 2023, p. 13). Excessive concerns may be the result of the speculative nature of predictions and the natural fear of new technologies, which in the past was expressed in the claim that lightbulbs may cause blindness (Murphy et al. 2023). Changing the discourse and negating the alarmist, apocalyptic, doomsday narrative is one potential step towards slowing down the erosion of trust in information and the media and preventing an information/epistemic apocalypse.

The implications of the apocalyptic narrative are visible in the public sphere. On the one hand, they shape the discourse; on the other, they fuel fear of modern technologies, often leading to their premature demonization (Yadlin-Segal & Oppenheim 2021). It has already been shown that regularly repeated warnings contribute to a decrease in trust, excessive scepticism and more frequent labelling of real materials as fake (Vaccari & Chadwick 2020; Twomey et al. 2023). This, in turn, may result in a self-fulfilling prophecy: poorly balanced messages suggesting a complete erosion of the epistemic value of information or its carriers may contribute to the erroneous questioning of the veracity of information and recordings.

British Prime Minister Rishi Sunak, who himself fell victim to a discrediting deep fake in 2023, indicated that deep fakes "pollute the public information ecosystem" (Gye 2023). This assessment seems accurate in the face of the noticeable correlation between the increasing number of deep fakes and the low level of public trust in information and the media (Luminate 2023; Home Security Heroes 2023). In our opinion, giving up the apocalyptic narrative does not have to result in ignoring the problem. We are facing the considerable challenge of the reduced epistemic value of recordings, which can be empirically measured, and the first signs of the described scenarios are visible. We do not deny that, in the worst-case scenario, an information/epistemic apocalypse may occur; we decided to examine this directly in the context of elections. The trends evidenced in this study indicate mostly the occurrence of individual cases (with the potential to grow) rather than a mass phenomenon. Fallis (2021, p. 625) argues that "as deepfakes become more prevalent, it may be epistemically irresponsible to simply believe that what is depicted in a video actually occurred". A significant weakness of the apocalyptic narrative is the inability to set a clear boundary for an information apocalypse and, consequently, the difficulty of empirically verifying the thesis.

We argue that a more appropriate term to describe the ongoing process is "pollution of the information environment". In our opinion, it is currently epistemically irresponsible to simply not believe that what is depicted in a video actually occurred. In only 11 cases of elections or election campaigns in 2023 were we able to determine the appearance of deep fakes that received media coverage, and in only 2 cases do we assume some non-decisive impact on the results of electoral processes. We have not recorded a significant erosion of democratic elections caused by the reported deep fakes, although some phenomena should be a source of justified concern.

Deep fakes in the context of elections—query of selected cases

United States of America

Despite concerns about the possibility of manipulation before the 2020 US presidential election, deep fakes did not play a significant role during that campaign (Meneses 2021). At this moment there are clear concerns about the course of the presidential election in 2024 (Klein 2023; Ulmer & Tong 2023). A whole list of minor incidents was recorded in 2023 alone. However, none of them had the potential to become a game-changer or confirms an already existing "information apocalypse".

After Joe Biden announced his readiness to run for office, Republicans produced an AI-generated video presenting the disastrous consequences of Biden's second term (Johnson 2023). Biden is regularly the target of attacks aimed at portraying him as unable to hold office. Modifications of his speeches circulating on social media are mixed with parodic performances, including singing the Baby Shark theme (Klein 2023). Biden was also portrayed dressed as trans celebrity Dylan Mulvaney. The video was mostly satirical in nature, but it gained a significant audience, reaching up to several million recipients. Such happenings are not without significance, as they can subsequently influence voters' behaviour and strengthen cognitive bias (Immerwahr 2023).

US Vice President Kamala Harris was the victim of a remake in which the original audio of her speech was replaced with disparaging material. Her voice was rambling, giving the false impression that she might have been intoxicated (Farid 2023). Another deep fake video portrayed the actor Morgan Freeman allegedly criticizing Biden and calling him "a fool"; the falsified footage was seen by thousands of X users (Reuters Fact Check 2023). One case may seem absurd but has amassed a sizable audience of nearly 90,000 followers on Twitch: a debate between two live-generated deep fakes imitating Biden and Trump, streamed round the clock (Farid 2023; TrumpOrBiden2024, 2023).

Donald Trump was portrayed hugging one of his bitter rivals. Voice cloning was used to dub Trump's controversial social media posts (Isenstadt 2023), and the content generated at least 60,000 views on YouTube. He was also depicted allegedly dancing with a 13-year-old girl (Marcelo 2023). A deep fake video depicting CNN journalist Anderson Cooper was meant to mock CNN's real reaction to Trump's town hall and was shared in pro-Trump circles on social media, including by Trump himself (Mastrangelo 2023); his post on Truth Social was shared more than 5,000 times. Trump's supporters disseminated a deep fake video of Hillary Clinton in which she allegedly suggested that Democrats could control Ron DeSantis (Gorman 2023). This deep fake video was seen by almost 900,000 viewers, but it was labelled as AI-generated by platform X.

The cases described above were mainly aimed at harming rival candidates or were parodic in nature. There is also a second pillar of the use of deep fakes for election campaign purposes. Francis Suarez, the Republican mayor of Miami, used his own deep fake avatar in the pre-campaign for the 2024 US presidential election. The quality was poor, but it was aimed at allowing communication with voters. Suarez eventually withdrew from running for the Republican Party nomination (Economist 2023).

In our opinion, at this point none of the analysed cases, even those amassing a large audience, had the potential to significantly affect the election result. What should attract attention is the growing number of deep fakes that are deliberately shared by leading political actors and contribute to the pollution of the information space. Such a significant number of deep fakes may further disrupt the epistemic value of the media and is slowly heading towards the doomsday scenarios.

Turkiye

The presidential election in Turkiye in 2023 was a bitter fight between the incumbent president Tayyip Erdogan and the opposition. In May, opposition leader Kemal Kilicdaroglu accused Russia of attempting to manipulate public opinion by using AI-generated content (Dallison 2023). Earlier, the third main candidate, Muharrem İnce, decided to withdraw from the race in response to deep porn content depicting him that circulated on social media (Michaelson 2023). Although İnce's popular support was estimated at 5% and he was not among the top candidates, forcing a candidate to withdraw has a completely different qualitative dimension, especially since the fake content was revealed at the very end of the campaign. However, this was not the result of an accumulation of synthetic content, but a personalized attack on a specific person.

Additionally, Erdogan's staff used an edited clip in which his main opponent, Kilicdaroglu, appeared to perform alongside a representative of the Kurdistan Workers' Party, recognized as a terrorist organization (Sparrow & Ünker 2023). Although it was labelled a deep fake by the media, it was probably produced by editing and merging two different clips, one presenting the leader of the terrorist organization and one presenting Kilicdaroglu. The technical nature of this content is unclear, but it is very likely that it imitated a deep fake disinformation pattern.

Argentina

In October 2023, presidential elections took place in Argentina. The campaign period was marked by the extensive use of deep fake technology to discredit political opponents or for self-promotion (political advertising). These phenomena prompted a "New York Times" commentator to call it "the first AI election" (Nicas 2023).

Losing candidate Sergio Massa’s campaign staff used deep fake technology on a large scale to generate his election posters. Some of them were stylized as classic Soviet propaganda posters. Additionally, Massa’s staff produced dozens of images using pop culture references and memes, including movie posters that incorporated the image of Massa depicted as a strong, fearless leader (Nicas 2023).

AI technology was also used to produce discrediting materials. For example, a deep fake in which Massa's opponent, Javier Milei, allegedly explained how a business selling human organs might work was marked as AI-generated, but it was clearly intended to cause harm and to lower trust in Milei (Käss 2023). Massa's staff generated footage of other important political actors in Argentina, presenting Milei as a zombie or a madman. In response, Milei shared AI-generated images presenting Massa as a communist leader. The campaigns conducted by both politicians gained enormous popularity. The images generated by Milei supposedly reached up to 30 million viewers (Nicas 2023), and single images were regularly viewed or shared by thousands of recipients.

The actions of the election teams apparently encouraged supporters of both politicians to experiment with deep fakes. Again, satirical contexts and artistic associations were mainly used, but the sheer number of fake images circulating on social media effectively undermined trust in information. Some real recordings were labelled as fakes by the politicians' supporters (Nicas 2023).

None of the deep fakes used was groundbreaking enough to completely change the outcome of the election, but one should note the multidimensional consequences that the mass use of deep fakes might have for the functioning of society and the information space. Particularly disturbing is the new dynamic of public debate in Argentina, in which both sides responded with AI-generated content, leading to a kind of arms race, as well as the acceptance of this type of strategy, reflected in the behaviour of citizens.

Of all the examples we have discussed, Argentina has come closest to the use of deep fakes for electoral purposes on a mass scale. However, it should be noted that the vast majority of AI-generated content did not imitate reality, and the parodic nature allowed for relatively easy recognition of the materials.

Poland

In August 2023 the main opposition party in Poland used voice cloning to dub emails leaked from a government mailbox with the voice of Polish Prime Minister Mateusz Morawiecki. The deep fakes were disseminated on social media. They also featured the controversial CEO of the largest Polish state-owned energy company, PKN Orlen.

The posts collected tens of thousands of interactions of various types, but they did not gain much recognition in Poland, mainly due to critical assessment from the media and the non-sensational nature of the deep fakes. They were widely considered a "new stage of political struggle" and an attempt to test a non-regulated grey zone (Breczko 2023).

In response to the videos shared by the opposition, a member of the government coalition created a rather amateurish deep fake that met with almost no response. He used the cloned voice of Morawiecki’s main rival, Donald Tusk, who allegedly admitted he was a fraud.

United Kingdom

In October 2023, a recording was published on platform X allegedly presenting the voice of the leader of the opposition Labour Party, Keir Starmer, swearing at and attacking his staff in an obscene manner. The publication was synchronized with the party conference in Liverpool, "probably their last before the UK holds a general election" (Meaker 2023a), which might take place in 2024. The shocking recording was quickly debunked by Labour representatives as a "deep fake" but still attracted a significant audience, "approaching nearly 1.5 million hits" (Bristow 2023).

This particular deep fake was clearly aimed at undermining trust in Starmer as a potential candidate for prime minister, but it was quickly debunked, long before the election, which reduces its manipulative potential.

France

An interesting minor case was recorded in September 2023 in France, when Juliette de Causans, a candidate in the Senate elections, used AI to beautify her election poster. The AI generated a heavily modified image, which was met with criticism (Styllis 2023). As de Causans was not among the top candidates, her chances of influencing the election results with the retouched photo were relatively low, but this application of deep fake technology created an interesting precedent that may become an alternative to traditional methods of manual beautification aimed at influencing voters.

India

In April 2023 a representative of the ruling BJP party released an audio recording presenting one of the opposition leaders, Palanivel Thiagarajan, allegedly accusing his own party of illegal financial transactions. Thiagarajan denied the accusations, suggesting the possibility of deep fake manipulation. However, a later analysis of the audio track was inconclusive (Christopher 2023b).

The attempt to discredit Thiagarajan is unlikely to have a significant impact on future elections, given the relatively long time until they are held. It seems that Thiagarajan actually fell victim to a deep fake, but the fact that the authenticity of recordings can be questioned by reference to deep fakes is worth noting. This phenomenon has been widely described by researchers as a potential threat to the integrity and credibility of information and the media. Researchers coined the term "liar's dividend" to describe the leverage gained by people who deny the veracity of materials by calling them fakes (Chesney & Citron 2019).

An original form of political promotion is deep fakes presenting the Prime Minister of India, Narendra Modi, singing popular songs. Each gains several million views and collects positive reviews on social media. Although the fake nature of the content is clear, it helps to warm Modi's image and may evoke positive associations with the politician. Moreover, recordings of Modi's speeches are also generated in Indian languages other than Hindi, allowing him to reach communities usually excluded from political debate, which should be seen as a specific form of political advertising (Christopher 2023a).

Bulgaria

Bulgaria is known to be a hotspot for Russian influence as well as foreign and domestic disinformation (Nehring & Sittig 2023), but it saw surprisingly few incidents of deep fake disinformation before the parliamentary election in April 2023. As in other countries, some deep fake videos featured well-known news anchors reading entirely fabricated news (Ignatow 2023). The most probable motives behind these deep fakes were shadowy business interests and smear campaigns without direct political implications.

Yet, a couple of weeks before the regional elections in October 2023, a deep fake video of Prime Minister Nikolai Denkov circulated on social media and made its way into all major news outlets (BNT 2023). Denkov allegedly addressed the entire nation, explaining a rather odd investment scheme involving the Russian oil company Lukoil. The video was quickly debunked by all major Bulgarian media, which pointed to a pronunciation error in the audio track that resembled Russian rather than Bulgarian intonation.

The biggest political implications of this deep fake might have been psychological in nature, as it spread fear about Russian election interference and discredited the image of Denkov. Yet, comparing pre-election polls with the election outcome, if this video had any consequences for the elections at all, the only plausible effect would have been to discourage electoral participation by deepening the general disenchantment with politics and politicians in Bulgarian society.

Taiwan

The 2024 presidential election in Taiwan has already been targeted by deep fakes. In August 2023, an audio deep fake of Ko Wen-je, the nominee of the Taiwan People's Party and former mayor of Taipei, surfaced on social media and was mailed to several media agencies (Maiberg 2023b). Ko allegedly criticized his opponent's visit to the USA, called him pompous, and said that his supporters were paid up to $800. Ko and his party quickly debunked the claims, and the Investigation Bureau confirmed the fake nature of the recording.

While the entire outlook, instruments and tactics of this deep fake attack resemble classic "active measures" of influence and interference operations and thus suggest a professional campaign, there was no immediate visible effect on the election process. Since the content of this deep fake was barely sensational enough to decisively influence the outcome of the election, it was most probably not meant to swing the entire election, but should rather be seen as one small piece of a larger mosaic of disinformation.

Indonesia

In Indonesia the presidential election will be held in 2024. The first signals of the use of deep fakes were recorded in 2022. A deep fake video depicting potential candidate Anies Baswedan allegedly supporting a person "accused of embezzling charity money" was aimed at discrediting him (Harish 2023). Yet, so far, the nature of these deep fakes has not directly threatened the outcome of the election.

In April, the voice of President Joko "Jokowi" Widodo was used to create a cover of the popular song "Asmalibrasi", which went viral on social media, racking up over 5 million views and 10,000 retweets on Twitter, as well as over 188,000 likes on TikTok. In October, a video in which Jokowi spoke Chinese gained a significant number of views, estimated at more than 2 million on TikTok alone (Harish 2023). While the cover of the song can be treated as a form of warming Jokowi's image, the synthesized speech has a deeper political dimension. On the one hand, it can be used to promote the politician's linguistic skills; on the other, it creates an unclear connection between Jokowi and China, which, in the face of anti-Chinese sentiment in Indonesia, has a clear political dimension. However, in both cases no immediate political effect threatening the 2024 elections was observed.

Slovakia

Of all the elections in 2023, Slovakia probably saw the most challenging deep fake disinformation attempt. In September, two days before the election and during the traditional 48-hour moratorium on political campaigning and reporting, an audio deep fake appeared on social media. The candidate of the liberal Progressive Slovakia party, Michal Šimečka, and journalist Monika Tódová of the newspaper "Denník N" allegedly discussed a scheme to rig the election by buying votes from the country's marginalized Roma minority (Meaker 2023b). What made this disinformation attack particularly tricky was that it used only an audio file and was spread so close to the election date, making it harder to debunk effectively.

While the tactics and specifics of this deep fake disinformation attack made it particularly challenging, it might also have been relevant to the outcome of the election. Pre-election polls saw a tight race between Šimečka's Progressives and the SMER party, and most polls predicted SMER to win the election (SMER in fact won by 5%). In such a tight race, it was very hard to measure the effect of this one deep fake, yet the content of the audio suggests that it was meant to demobilize Šimečka's voters and encourage far-right and populist voters. In this case of a highly contested, polarized and tight election with little to no time to react, deep fake disinformation had the potential to make a significant difference.

Consequences for the election processes

According to estimates, the number of deep fakes posted online tripled in 2023 compared to 2022, while the number of audio deep fakes increased eightfold (Ulmer & Tong 2023). This shows a certain trend, but it is mainly quantitative in nature and does not refer directly to the power of deep fakes to influence election processes. Even an increase in individual cases does not necessarily translate into a decisive impact on voting behaviour. This impact is extremely difficult, if not impossible, to measure. Deep fakes are part of the disinformation landscape and should be analysed as a contributing factor. The threats and risks of deep fakes for elections should also be considered in the context of undermining trust in the election process as well as in information, truth, facts, authenticity and the media. This is particularly important in view of the year 2024: according to the Integrity Institute, the major elections in 2024 will directly affect 3.65 billion people worldwide (Harbath & Khizanishvili 2023).

The USA and Argentina seem unique due to the extensive use of deep fakes, as evidenced by the number of reported cases, but not necessarily due to their quality or persuasive potential. In our opinion, the breakthrough lies primarily in the mere fact of using this technology in the context of elections, not in its direct consequences. Some countries had their "first deep fake moments" (Bristow 2023) in 2023, whereas others became the stage for tests that may lead to the widespread use of this technology for electoral purposes in the future. However, this absolutely does not allow us to conclude that the information environment has been flooded with deep fakes on a global scale or that an "information apocalypse" is already taking place. This does not mean, however, that the epistemic value of audio and visual materials remains the same.

Although we believe that 2023 has not brought any "breakthrough deep fakes" so far, it is worth noting cases in which deep fakes could have had an impact on election results. Two cases described above deserve special attention.

In Turkiye one of the candidates was practically forced to withdraw from the presidential race and faced severe reputational consequences, which could have contributed to a new distribution of votes. The use of deep porn against Muharrem İnce was adapted to local conditions and the candidate's life situation. His withdrawal was a clear confirmation of the effectiveness of a "deep porn strategy".

The consequences of the campaign implemented in Slovakia are difficult to estimate. Analysis of the poll results shows slight fluctuations in favour of SMER, while the attacked Progresívne Slovensko did not record any significant declines in the last days of the election campaign. Deviations of 1 percentage point might be treated as an element of statistical error (Politico 2023), but they can also be seen as decisive points allowing for the formation of a government coalition, given the tight race between the two leading parties.
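
For scale, a simple illustrative calculation (the sample size and support level are assumptions chosen purely for illustration): for a poll of n = 1,000 respondents and a party polling at p = 0.2, the margin of error at the 95% confidence level is 1.96 × sqrt(p(1 − p)/n) = 1.96 × sqrt(0.2 × 0.8/1,000) ≈ 2.5 percentage points, so a 1-point movement between consecutive polls lies well within ordinary sampling noise.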

In both cases, the strategies implemented by the attackers are of key importance. The attacks were executed in the last days of the election campaigns, which shortened the reaction time and partly prevented the necessary debunking. We can therefore point to the use of classic "decisional checkpoints", defined as the short time preceding an election when "irrevocable decisions are made, and during which the circulation of false information therefore may have irremediable effects" (Chesney & Citron 2019). The campaigns in Turkiye and Slovakia may well pave the way for future uses of this strategy.

According to a poll conducted in August 2023, "more than 70% of citizens in the UK and Germany who did understand AI and deepfake technology say they are concerned about the threat such technology poses to elections" (Luminate 2023). Home Security Heroes (2023) conducted a similar survey in the USA in the context of the 2024 presidential election. As many as 77% of respondents had encountered deep fake content related to political candidates and, unsurprisingly, 74.7% of participants expressed concern about the potential impact of deep fakes on the upcoming election.

It is safe to conclude that deep fake technology carries huge potential for psychological influence. In fact, voters' perceptions and fears about deep fakes might lead to situations where malign actors do not need to use a deep fake at all, but simply invoke the possibility that a certain piece of information might be one. There are several examples where such claims have already influenced politics or social processes. In early 2019, parts of the military launched a coup attempt in Gabon after rumours spread that a video message of then-president Ali Bongo was actually a deep fake and the president himself was dead (Delcker 2019).

All the surveys mentioned above clearly support the claim that, notwithstanding the actual impact deep fakes have had on elections so far, voters and politicians alike perceive them as a threat. Over 40% of respondents in a US study indicated a sense of scepticism or of being misled or misinformed, which translates into more frequent questioning of the authenticity of displayed materials and an active search for confirmation of their veracity (Home Security Heroes 2023). A recent study (Ahmed 2023) suggests that exposure to deep fakes is correlated with social media news scepticism. This partly fits the "apocalyptic" narrative, but the scale of the phenomenon is still limited, as we do not see the erosion of democratic processes or a complete questioning of media authenticity. In this sense, ocularcentrism still seems to be prevalent.

Nevertheless, the sheer number of deep fakes makes it more difficult to separate truth from fake, which creates new challenges for recipients. CBS alone rejected around 900 videos allegedly presenting events in the Gaza Strip in autumn 2023, which forced the media company to announce an increase in manpower to counteract "the deep fake pandemic" (Lebovic 2023). Again, this fits the dystopian narrative of an "information apocalypse", but the media countermeasures constituted an effective barrier, although they required additional expenditure and resources.

This does not mean that the epistemic value of information remains the same. The aforementioned increase in the number of deep fakes gradually increases uncertainty among recipients, which may be particularly important at critical moments. Numerous images that appeared in the media and presented real recordings of the war in the Gaza Strip were dismissed by online commentators as fakes (Bedingfield 2023). In October 2023, heavily contested online debates with millions of participants spun around an image presented by the Israeli government depicting innocent children as victims of the terrorist attack. Online users accused the Israeli government of deep faking the image and cited deep fake detection software as evidence. Yet the picture was genuine, and the mere insinuation that the government had employed deep fake technology was used as a propaganda weapon (Maiberg 2023a).

In the cases described above, it is not so much the deep fakes themselves that are at the core of the problem, but rather the fear and uncertainty about whether a piece of information might be a deep fake. This is one prediction of the information apocalypse narrative that has already been confirmed. The mere existence of high-quality deep fake technology can be used as a weapon of information warfare and propaganda. This demonstrates that the disruptive potential of deep fakes is not necessarily rooted in their ability to persuade audiences of their messages. Instead, perceived psychological threats, fear, uncertainty and the inability to distinguish between authentic and inauthentic content, between fact and fake, may be enough to exert influence and manipulate, especially if fears are fuelled by unreliable reports and society does not develop protective mechanisms. However, the current intensity of these processes allows us to distinguish them from the extreme vision of an information apocalypse, without disregarding the threats, and to challenge the two contradictory narratives that have emerged.

The first of them creates the belief that an "information apocalypse" is coming, which in our opinion is mainly rooted in journalistic discourse. The increase in the number of deep fakes reported in 2023, and the case studies described above, should obviously be seen as a warning signal, but none of the analysed campaigns heralds the collapse of electoral integrity. Much more difficult to measure are the consequences of uncertainty about the authenticity of the media and the decline in trust in news.

The second narrative, in contrast to the doomsday scenarios, tends to underestimate the problem (Economist 2023; Habgood-Coote 2023; Immerwahr 2023; Simon et al. 2023). Some experts doubt that the potential of deep fakes is large enough to completely change the outcome of an election, a doubt that finds an outlet in statements like: "We still have not one convincing case of a deepfake making any difference whatsoever in politics" (Economist 2023). One can only partially agree with such an opinion, as the level of impact and harmfulness can be graded. Even if there has not been a "breakthrough deep fake" so far, we should not automatically assume that individual cases of deep fakes have not had any influence on elections and the political process. Although fears about the impact of generative AI on misinformation might indeed be partly overblown (Simon et al. 2023), one should not be overly optimistic in assuming that a "breakthrough deep fake" will not occur in the future. These two narratives about deep fakes will interpenetrate, contributing to additional information chaos.

Another consequence of the growing number of deep fakes might be the growing interest in using deep fake technology for political advertising. It is a powerful tool, as it already allows many characteristic features of candidates to be changed and deficits in their performances to be minimized or completely eliminated. With the help of deep fake technology, candidates may appear more appealing, younger, better-looking and more energetic, can speak to many audiences at the same time in different languages, and can customize their messages for each voter personally. The ongoing experiments with speech translation and personalized messages may set new standards in political advertising, the consequences of which cannot yet be clearly assessed in technological, legal or ethical terms. These problems, however, are still understudied and do not receive enough recognition.

Conclusions

We believe that none of the cases in which deep fakes were used in the context of elections so far has had a decisive impact on the course of the elections, which does not mean they had no effect at all. The increasing number of reported cases may indicate a certain trend with further potential for growth. Although dystopian predictions of an "information apocalypse" have not (yet?) come true, there are already noticeable signs of undermined trust in information, politics and the media, which strengthens the sense of uncertainty in society and opens up new possibilities for manipulation. An excessively alarmist narrative does not contribute to understanding the impact of deep fakes. In our opinion, it is definitely more cognitively and socially responsible to use terms that better reflect the nature of the phenomenon (i.e. "pollution of the public information ecosystem"), even if an "information apocalypse" may be an ultimate consequence of over-pollution.

Our analysis of a non-representative sample of deep fake election-related content in 2023 has produced a variety of interesting results:

First, the sheer number of deep fakes registered in the context of elections increased in 2023. However, deep fakes were not observed in most of the elections. Nevertheless, it is very likely that hardly any future election will be completely safe from deep fakes. For the upcoming "super-election year" of 2024, this means that the quantity of deep fakes created and disseminated in the context of elections will most probably increase again. Some countries will face "deep fake campaigns" for the first time in 2024 and may draw upon the experience of countries where the use of deep fakes was tested on a larger scale in 2023 (e.g. USA, Argentina) or in a targeted way (e.g. Turkiye and Slovakia).

Second, this increase in quantity was not matched by an increase in quality sufficient to influence elections. The fear of an "information apocalypse", and also of a massive interference in the outcome of elections, has not (yet?) materialized. Only in two of the cases analysed in this study, Turkiye and Slovakia, is it safe to assume that deep fakes had some effect on the election. But even there, deep fakes did not "swing", "steal" or decisively turn around the outcome of the election, and they should not be seen as real "breakthrough deep fakes".

Third, despite the increase in the quality of deep fake technology, there has not been a case in which this quality led to an equally sophisticated attack on elections. Instead, it seems so far that it is not the quality of deep fakes that makes them a dangerous weapon against democratic elections, but the technology itself and its perception. At the moment, it is not about the one big, powerful deep fake the night before election day that turns the whole election around; it is about a dozen or so deep fakes of mediocre quality on minor political issues that create general distrust in parties, candidates and the election process itself, demobilize voters and build a cumulative effect of distrust and disenchantment. If the apocalyptic narrative teaches anything, it is the need to pay particular attention to human interactions with deep fakes and to the consequences of declining trust in information and its carriers. Little by little, deep fakes attack not only individual politicians and political decisions, but the very existence of truth, authenticity and facts. In the case of elections, this means that we should also worry about election integrity and basic trust in democracy.

Fourth, our findings do not suggest that deep fakes pose no direct threats to the outcomes of elections. The indicated cases of attacks on decisional checkpoints should be analysed in detail, as they highlight a strategy that might still be applied to swing the outcome of elections directly. The resilience of democratic systems will be of key significance, as an erosion of trust may increase the probability of successful, direct attacks. Therefore, apocalyptic visions also have scientific value, because they draw attention to the core of the problem, even if they do so in an exaggerated and overly alarmist way.

Fifth, next to quantity and the cumulative effect of their occurrence in the information space, one of the biggest threats posed by deep fakes is their psychological effect. Voters, politicians and journalists are already confused and uncertain about the authenticity of information, which is partly a consequence of fake news and disinformation campaigns. The mere fear of not being able to detect and distinguish deep fakes from authentic content might alter voters' behaviour through insecurity, i.e. for psychological reasons. This also suggests that research, public discourse and AI media literacy efforts should probably shift their focus away from deep fake technology towards human interactions with and responses to deep fakes, whereas journalistic discourse should shape public debate in a responsible way, without dazzling audiences with apocalyptic visions that may lead to the effect of a self-fulfilling prophecy.

Funding

Open Access funding enabled and organized by Projekt DEAL. No funding was used to conduct this research.

Declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Footnotes

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

Ahmed, S. 2023 Navigating the Maze: Deepfakes, Cognitive Ability, and Social Media News Skepticism New Media & Society 25 (5): 11081129 10.1177/14614448211019198.CrossRefGoogle Scholar
Bedingfield, W. (2023) Generative AI Is Playing a Surprising Role in Israel-Hamas Disinformation. Accessed 04 Nov 2023, https://www.wired.co.uk/article/israel-hamas-war-generative-artificial-intelligence-disinformationGoogle Scholar
BNT (2023). Deep fake Video Using the Face and Voice of the Prime Minister Spreads on Social Media. Accessed 08 Nov 2023, https://bnt.bg/news/deep-fake-video-using-the-face-and-voice-of-the-prime-minister-spreads-on-social-media-321680news.htmlGoogle Scholar
Breczko, B. (2023) Deepfaki w kampanii wyborczej. PO stworzyła głos Morawieckiego przy pomocy AI. Otwiera się nowy etap walki politycznej. Accessed 15 Nov 2023, https://wyborcza.biz/biznes/7,177150,30113336,deepfake-i-w-wyborach-platforma-obywatelska-stworzyla-glos.htmlGoogle Scholar
Bristow, T. (2023) Keir Starmer Suffers UK politics’ First Deepfake Moment. It won’t be the Last. Accessed 04 Nov 2023, https://www.politico.eu/article/uk-keir-starmer-labour-party-deepfake-ai-politics-electionsGoogle Scholar
Broinowski, A. 2022 Deepfake Nightmares, Synthetic Dreams: A Review of Dystopian and Utopian Discourses Around Deepfakes, and Why the Collapse of Reality May Not Be Imminent—Yet Journal of Asia-Pacific Pop Culture 7 (1): 109139 10.5325/jasiapacipopcult.7.1.0109.CrossRefGoogle Scholar
CenSAMM (2021) Apocalypticism. [In:] Critical Dictionary of Apocalyptic and Millenarian Movements. (Eds. J. Crossley, A. Lockhart)Google Scholar
Chesney, B. and Citron, D. 2019 Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security California Law Review 107 (18): 17531820.Google Scholar
Christopher, N. (2023a) AI Modi Started as a Joke, But it could Win Him Votes. Accessed 06 Nov 2023, https://restofworld.org/2023/ai-voice-modi-singing-politicsGoogle Scholar
Christopher, N. (2023b) An Indian politician says Scandalous Audio Clips are AI Deepfakes. We had them tested. Accessed 15 Aug 2023, https://restofworld.org/2023/indian-politician-leaked-audio-ai-deepfakeGoogle Scholar
European Commission (2024). Proposal for a Regulation of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending certain Union Legislative Acts COM(2021)0206—C9-0146/2021—2021/0106(COD))Google Scholar
Dallison, P. (2023) Turkish Opposition Leader Accuses Russia of Spreading Conspiracies, Deep Fakes ahead of Election. https://www.politico.eu/article/turkey-election-opposition-russian-interference-kilicdaroglu-erdoganGoogle Scholar
de Ruiter, A. 2021 The Distinct Wrong of Deepfakes Philosophy & Technology 34 (4): 13111332 10.1007/s13347-021-00459-2.CrossRefGoogle Scholar
Delcker, J. (2019) Welcome to the Age of Uncertainty. Accessed 08 Nov 2023, https://www.politico.eu/article/deepfake-videos-the-future-uncertaintyGoogle Scholar
Economist (2023). AI will Change American Elections, but not in the Obvious Way. Accessed 31 Aug 2023, https://www.economist.com/united-states/2023/08/31/ai-will-change-american-elections-but-not-in-the-obvious-wayGoogle Scholar
Eder, M.K. 2019 The Information Apocalypse West East Journal of Social Sciences 8 (3): 261269.CrossRefGoogle Scholar
Fallis, D. (2021) The Epistemic Threat of Deepfakes. Philosophy & Technology 34(4): 623–643. https://doi.org/10.1007/s13347-020-00419-2
Farid, H. and Schindler, H.-J. (2020) Deep Fakes. On the Threat of Deep Fakes to Democracy and Society. Berlin: Konrad Adenauer Stiftung.
Farid, H. (2023) Deepfakes in the 2024 Presidential Election. Accessed 15 Aug 2023, https://farid.berkeley.edu/deepfakes2024election
Galston, W.A. (2020) Is Seeing Still Believing? The Deepfake Challenge to Truth in Politics. Accessed 17 Nov 2023, https://www.brookings.edu/articles/is-seeing-still-believing-the-deepfake-challenge-to-truth-in-politics
Geddes, K.G. (2021) Ocularcentrism and Deepfakes: Should Seeing Be Believing? Fordham Intellectual Property, Media and Entertainment Law Journal 31(4): 1042–1083.
Global Coalition for Tech Justice (2023) About the Campaign. Accessed 20 Oct 2023, https://yearofdemocracy.org/about
Gorman, L. (2023) We Can’t Let AI Take Over the 2024 Election. Accessed 15 Aug 2023, https://thehill.com/opinion/technology/4129431-we-cant-let-ai-take-over-the-2024-election
Gye, H. (2023) Deep Fakes Could ‘Pollute’ Next Year’s General Election, Rishi Sunak Warned Ahead of AI Summit. Accessed 03 Nov 2023, https://inews.co.uk/news/politics/ai-fuelled-disinformation-disrupt-election-next-year-officials-warn-summit-2711995
Habgood-Coote, J. (2023) Deepfakes and the Epistemic Apocalypse. Synthese 201: 103. https://doi.org/10.1007/s11229-023-04097-3
Harbath, K. and Khizanishvili, A. (2023) Insights from Data: What the Numbers Tell Us About Elections and Future of Democracy. Accessed 04 Nov 2023, https://integrityinstitute.org/blog/insights-from-data
Harish, F. (2023) Jokowi Deepfakes? Fears Grow Over AI-Generated Election Hoaxes. Accessed 05 Nov 2023, https://www.thejakartapost.com/indonesia/2023/06/07/jokowi-deepfakes-fears-grow-over-ai-generated-election-hoaxes.html
Home Security Heroes (2023) AI Deepfakes in 2024 Election. Accessed 04 Nov 2023, https://www.homesecurityheroes.com/ai-deepfakes-in-2024-election
Horvitz, E. (2022) On the Horizon: Interactive and Compositional Deepfakes. In: Proceedings of the 2022 International Conference on Multimodal Interaction, 653–661.
Ignatow, T. (2023) Открадната самоличност с изкуствен интелект: Фалшиво видео с Аделина Радева обикаля социалните мрежи [Identity Stolen with Artificial Intelligence: Fake Video of Adelina Radeva Circulates on Social Media]. Accessed 08 Nov 2023, https://bntnews.bg/news/otkradnata-samolichnost-s-izkustven-intelekt-falshivo-video-s-adelina-radeva-obikalya-socialnite-mrezhi-1249751news.html
Immerwahr, D. (2023) What the Doomsayers Get Wrong About Deepfakes. Accessed 20 Nov 2023, https://www.newyorker.com/magazine/2023/11/20/a-history-of-fake-things-on-the-internet-walter-j-scheirer-book-review
AFP Indonesia (2023) Doctored Video of Indonesian President Speaking in Mandarin Circulates Online. Accessed 05 Nov 2023, https://factcheck.afp.com/doc.afp.com.33Z42RT
Isenstadt, A. (2023) DeSantis PAC Uses AI-Generated Trump Voice in Ad Attacking Ex-President. Accessed 15 Aug 2023, https://www.politico.com/news/2023/07/17/desantis-pac-ai-generated-trump-in-ad-00106695
Johnson, A. (2023) Republicans Share an Apocalyptic AI-Powered Attack Ad Against Biden: Here’s How to Spot a Deepfake. Accessed 15 Aug 2023, https://www.forbes.com/sites/ariannajohnson/2023/04/25/republicans-share-an-apocalyptic-ai-powered-attack-ad-against-biden-heres-how-to-spot-a-deepfake
Käss, S. (2023) Ein Sprung ins Ungewisse [A Leap into the Unknown]. Berlin: Konrad Adenauer Stiftung.
Kleemann, A. (2023) Deepfakes—Wenn wir unseren Augen und Ohren nicht mehr trauen können [Deepfakes: When We Can No Longer Trust Our Eyes and Ears]. SWP-Aktuell 43. https://doi.org/10.18449/2023A43
Klein, C. (2023) “This Will Be Dangerous in Elections”: Political Media’s Next Big Challenge Is Navigating AI Deepfakes. Accessed 15 Jul 2023, https://www.vanityfair.com/news/2023/03/ai-2024-deepfake
Langa, J. (2021) Deepfakes, Real Consequences: Crafting Legislation to Combat Threats Posed by Deepfakes. Boston University Law Review 101: 761–801.
Lebovic, M. (2023) These Israelis Are Fighting Hamas on the War’s Emerging ‘Deepfake’ Cyberfront. Accessed 04 Nov 2023, https://www.timesofisrael.com/these-israelis-are-fighting-hamas-on-the-wars-emerging-deepfake-cyberfront
Lim, W.M. (2023) Fact or Fake? The Search for Truth in an Infodemic of Disinformation, Misinformation, and Malinformation with Deepfake and Fake News. Journal of Strategic Marketing. https://doi.org/10.1080/0965254X.2023.2253805
Luminate (2023) Bots Versus Ballots: Europeans Fear AI Threat to Elections and Lack of Control Over Personal Data. Accessed 29 Oct 2023, https://www.luminategroup.com/posts/news/bots-versus-ballots-europeans-fear-ai-threat-to-elections-and-lack-of-control-over-personal-data
Maham, P. and Küspert, S. (2023) Governing General Purpose AI. Berlin: Stiftung Neue Verantwortung.
Maiberg, E. (2023a) AI Images Detectors Are Being Used to Discredit the Real Horrors of War. Accessed 04 Nov 2023, https://www.404media.co/ai-images-detectors-are-being-used-to-discredit-the-real-horrors-of-war
Maiberg, E. (2023b) Deepfake Audio Is Defaming a Presidential Candidate. Accessed 04 Nov 2023, https://www.404media.co/taiwan-claims-deepfake-audio-is-defaming-a-presidential-candidate
Marcelo, P. (2023) Image Claiming to Show Trump Dancing with Underage Girl Is Fake. Accessed 15 Aug 2023, https://apnews.com/ap-fact-check/image-claiming-to-show-trump-dancing-with-underage-girl-is-fake-00000188e8ebdc10ad9bf8eb72850000
Mastrangelo, D. (2023) Trump Shares Fake Video of Anderson Cooper Reacting to CNN Town Hall. Accessed 15 Aug 2023, https://thehill.com/homenews/media/4001639-trump-shares-fake-video-of-anderson-cooper-reacting-to-cnn-town-hall
Meaker, M. (2023a) Deepfake Audio Is a Political Nightmare. Accessed 04 Nov 2023, https://www.wired.com/story/deepfake-audio-keir-starmer
Meaker, M. (2023b) Slovakia’s Election Deepfakes Show AI Is a Danger to Democracy. Accessed 04 Nov 2023, https://www.wired.co.uk/article/slovakia-election-deepfakes
Meneses, J.P. (2021) Deepfakes and the 2020 US Elections: What (Did Not) Happen. arXiv preprint, arxiv.org/abs/2101.09092
Michaelson, R. (2023) Turkish Presidential Candidate Quits Race After Release of Alleged Sex Tape. Accessed 15 Aug 2023, https://www.theguardian.com/world/2023/may/11/muharrem-ince-turkish-presidential-candidate-withdraws-alleged-sex-tape
Muñoz, K. (2023) The Transformative Role of AI in Reshaping Electoral Politics. DGAP Memo 4: 7495542.
Murphy, G., Ching, D., Twomey, J. and Linehan, C. (2023) Face/Off: Changing the Face of Movies with Deepfakes. PLoS ONE. https://doi.org/10.1371/journal.pone.0287503
Nehring, C. and Sittig, H. (2023) Disinformation in Southeast Europe. Sofia: Konrad Adenauer Foundation.
Nicas, J. (2023) Is Argentina the First A.I. Election? Accessed 17 Nov 2023, https://www.nytimes.com/2023/11/15/world/americas/argentina-election-ai-milei-massa.html
Pawelec, M. (2022) Deepfakes and Democracy (Theory): How Synthetic Audio-Visual Media for Disinformation and Hate Speech Threaten Core Democratic Functions. Digital Society 1(2): 19. https://doi.org/10.1007/s44206-022-00010-6
Politico (2023) Slovakia – 2023 General Election. Accessed 04 Nov 2023, https://www.politico.eu/europe-poll-of-polls/slovakia
Reuters Fact Check (2023) Fact Check: Video of Morgan Freeman Criticizing Joe Biden Is a Deepfake. Accessed 15 Aug 2023, https://www.reuters.com/article/factcheck-idUSL1N36F1IT
Rini, R. (2020) Deepfakes and the Epistemic Backstop. Philosophers’ Imprint 20(24): 1–16.
Schick, N. (2020) Deep Fakes and the Infocalypse. London: Octopus Books.
Schwartz, O. (2018) You Thought Fake News Was Bad? Deep Fakes Are Where Truth Goes to Die. Accessed 17 Nov 2023, https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth
Simon, F.M., Altay, S. and Mercier, H. (2023) Misinformation Reloaded? Fears About the Impact of Generative AI on Misinformation Are Overblown. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-127
Sparrow, T. and Ünker, P. (2023) Faktencheck: Erdogan zeigt Fake Video von Kilicdaroglu [Fact Check: Erdogan Shows Fake Video of Kilicdaroglu]. Accessed 15 Aug 2023, https://www.dw.com/de/faktencheck-erdogan-zeigt-manipuliertes-video-von-kilicdaroglu/a-65562185
Stover, D. (2018) Garlin Gilchrist: Fighting Fake News and the Information Apocalypse. Bulletin of the Atomic Scientists 74(4): 283–288. https://doi.org/10.1080/00963402.2018.1486618
Styllis, G. (2023) French Senatorial Hopeful Admits Digitally Tweaking Campaign Photo. Accessed 03 Nov 2023, https://www.telegraph.co.uk/world-news/2023/09/15/juliette-de-causans-french-politics-digital-edit-posters
The Association of World Election Bodies (2023) World Election Calendar. Accessed 15 Nov 2023, http://aweb.org/eng/bbs/B0000007/list.do?menuNo=300052
Toews, R. (2020) Deepfakes Are Going to Wreak Havoc on Society. We Are Not Prepared. Accessed 17 Nov 2023, https://www.forbes.com/sites/robtoews/2020/05/25/deepfakes-are-going-to-wreak-havoc-on-society-we-are-not-prepared
TrumpOrBiden2024 (2023) Trump or Biden 2024. Accessed 30 Oct 2023, https://www.twitch.tv/trumporbiden2024
Twomey, J., Ching, D., Aylett, M.P., Quayle, M., Linehan, C. and Murphy, G. (2023) Do Deepfake Videos Undermine Our Epistemic Trust? A Thematic Analysis of Tweets That Discuss Deepfakes in the Russian Invasion of Ukraine. PLoS ONE 18(10): e0291668. https://doi.org/10.1371/journal.pone.0291668
Ulmer, A. and Tong, A. (2023) Deepfaking It: America’s 2024 Election Collides with AI Boom. Accessed 15 Jul 2023, https://www.reuters.com/world/us/deepfaking-it-americas-2024-election-collides-with-ai-boom-2023-05-30
Vaccari, C. and Chadwick, A. (2020) Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News. Social Media + Society. https://doi.org/10.1177/2056305120903408
Wahl-Jorgensen, K. and Carlson, M. (2021) Conjecturing Fearful Futures: Journalistic Discourses on Deepfakes. Journalism Practice 15(6): 803–820. https://doi.org/10.1080/17512786.2021.1908838
Weikmann, T. and Lecheler, S. (2023) Cutting Through the Hype: Understanding the Implications of Deepfakes for the Fact-Checking Actor-Network. Digital Journalism. https://doi.org/10.1080/21670811.2023.2194665
Westerlund, M. (2019) The Emergence of Deepfake Technology: A Review. Technology Innovation Management Review 9(11): 39–52. https://doi.org/10.22215/timreview/1282
Yadlin-Segal, A. and Oppenheim, Y. (2021) Whose Dystopia Is It Anyway? Deepfakes and Social Media Regulation. Convergence: The International Journal of Research into New Media Technologies 27(1): 36–51. https://doi.org/10.1177/1354856520923963