
5 - Bots and Computational Propaganda: Automation for Communication and Control

Published online by Cambridge University Press:  24 August 2020

Edited by Nathaniel Persily, Stanford University, California, and Joshua A. Tucker, New York University

Summary

This chapter explores core arguments surrounding the political use of bots. It details the brief history of their use online. It assesses the academic literature to highlight key themes on the subject of what some researchers call computational propaganda and others variously call “information operations,” “information warfare,” “influence operations,” “online astroturfing,” “cyberturfing,” and many other terms. Computational propaganda, and each of these other concepts to one degree or another, focuses on the ways in which algorithms, automation (most often in the form of political bots), and human curation are used to purposefully distribute misleading information over social media networks.

Type: Chapter
Book: Social Media and Democracy: The State of the Field, Prospects for Reform, pp. 89–110
Publisher: Cambridge University Press
Print publication year: 2020
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0/).

Introduction

Public awareness of the threat of political bots – and international fear of armies of automated accounts taking over civic conversations on social media – reached a peak in the spring of 2017. On May 8 of that year, former Acting US Attorney General Sally Yates and former US Director of National Intelligence James R. Clapper Jr. sat before Congress to testify on what they called “the Russian toolbox” used in online efforts to manipulate the 2016 US election (Washington Post Staff 2017). In response to their testimony and a larger US intelligence community (IC) report on the subject, Senator Sheldon Whitehouse said, “I went through the list [of tools used by the Russians] … it looked like propaganda, fake news, trolls, and bots. We can all agree from the IC report that those were in fact used in the 2016 election” (Washington Post Staff 2017).

Yates and Clapper argued that the Russian government and its commercial proxy – the Internet Research Agency (IRA) – made substantive use of bots to spread disinformation and inflame polarization during the 2016 US presidential election. These comments mirrored concurrent allegations made around the globe by other public officials, as well as by academic researchers and investigative journalists. Eight months earlier, during a speech before her country’s parliament, German Chancellor Angela Merkel had raised concerns that bots would affect the outcome of Germany’s upcoming election (Copley 2016). Shortly thereafter, the New York Times described the rise of “a battle among political bots” on Twitter.

Around the same time, research from the University of Southern California’s Information Sciences Institute concretized the ways that social media bots were being used to manipulate public opinion:

The presence [of] social bots in online political discussion can create three tangible issues: first, influence can be redistributed across suspicious accounts that may be operated with malicious purposes; second, the political conversation can become further polarized; third, the spreading of misinformation and unverified information can be enhanced. (Bessi and Ferrara 2016)

These findings were backed up by several other prominent studies that both preceded this work and have vindicated it since. Metaxas and Mustafaraj (2012) discussed findings in Science illuminating a similar distribution of influence across suspicious Twitter bot accounts used to defame a Massachusetts Senate candidate in 2010. Kramer, Guillory, and Hancock (2014), in the Proceedings of the National Academy of Sciences of the United States of America (PNAS), presented experimental evidence that emotional content spreads contagiously through online social networks when platforms manipulate what users see. Woolley and Howard (2018), in a book of country-specific case studies entitled Computational Propaganda, argued that bots are often used during events around the world to spread and bolster misinformation and disinformation online.

This chapter explores these, and other, core arguments surrounding the political use of bots. It details the brief history of their use online. It assesses the academic literature to highlight key themes on the subject of what some researchers call computational propaganda and others variously call “information operations,” “information warfare,” “influence operations,” “online astroturfing,” “cyberturfing,” and many other terms. Computational propaganda, and each of these other concepts to one degree or another, focuses on the ways in which algorithms, automation (most often in the form of political bots), and human curation are used to purposefully distribute misleading information over social media networks (Woolley and Howard 2016a).

This literature review details empirical work on how the bot as an internet-based tool, and computational propaganda as a political communication strategy, function in relationship to social media and democracy. As Luceri et al. (2019) argue, “the presence of social bots does not show any sign of decline despite the attempts from social network providers to suspend suspected, malicious accounts.” With this argument in mind, this chapter discusses the implications of the continued use of political bots and computational propaganda for social media and democracy.

The following discussion is broken into five parts: The first section explores bots in the context of their general use online and then unpacks research that examines their social use. The second looks into their political use and discusses research on how to detect such use. The third details arguments on how bots can and have been deployed over social media as tools used in the interest of democracy. The fourth outlines key arguments and research more broadly focused on computational propaganda, information operations, and the like. The fifth, and final, section illuminates gaps in the literature. It outlines ongoing and new research into how bots and computational propaganda are used over social media to affect democracy and summarizes the core ideas of this piece.

To begin, it is important to examine perspectives on exactly why bots have become a particular topic of concern for scholars who study the online world. How are they used technically? What are their social uses?

Bots

In part due to the political concerns detailed in the Introduction to this chapter, but also because of broader interest in artificial intelligence (AI) and automation, scholars and publics have begun to shine a light on the automated internet software technology known as the bot. Socially oriented versions of bots, which can be programmed to look and act like real people on sites like Facebook or Twitter, are often key tools for spreading computational propaganda. When this is the case, these programs have been referred to as digital “astroturf content” or as “political bots” (Ratkiewicz et al. 2011; Woolley 2016).

The word “bot” is an umbrella term that encapsulates many different kinds of automated online software programs or scripts. In fact, what counts as a bot is the topic of conjecture and debate within the technology community (Martineau 2018). Leonard (1998) called bots “the web’s first indigenous species” and set out to discuss and historicize the wide array of automated online actors that could be considered to exist in the category of “bot.” Both bots in general and social bots (sometimes called chat bots) – the narrower category of front-facing, communication-enabled online bots – have a broad array of uses outside the political sphere, however (Wagner et al. 2012). They were, and are, crucial applications for automating spam messaging over email (Zhuang et al. 2008). The earliest bots were designed for network maintenance by computer engineers facing infrastructural challenges, automating routine upkeep so that human coders could take on tasks requiring more critical oversight (Leonard 1998). Researchers, however, were quick to see their potential as “intelligent software” that could help people better navigate, and even communicate, via the Internet (Weld and Etzioni 1995).

In their most simple iteration as online programs that run automated tasks (while not directly interacting with other web users), bots have long been infrastructural tools used for activities relating to early iterations of online indexing and internet search (Middlebrook and Muller 2000; Seymour, Frantsvog, and Kumar 2011). Both simple strings of code intended to back up or update personal computers and socially oriented, automated, imposter accounts on Twitter can be referred to as bots. These automated programs have a substantial presence online. In fact, in 2015 the cybersecurity firm Incapsula (now known as Imperva Incapsula) found that bot usage made up around 50 percent of all online traffic (Incapsula 2015). In 2014, as many as 20 million accounts on Twitter were identified as bots (Motti 2014). The number of bots functioning on Facebook and other prominent platforms is less clear, in part due to firms’ close hold on user data and metrics. In 2017, however, Facebook self-reported to the US Securities and Exchange Commission (SEC) that an estimated 3–4 percent of accounts on the site, around 50 million, were “fake” (Facebook 2017). It is clear that a significant number of bots function online today, but it is also true that social bots have existed on the Internet for several decades.

The use of bots in online social settings dates back at least to their integral use over Internet Relay Chat (IRC) – a precursor to contemporary social media (Mutton 2004). Social bots appeared even earlier, in experiments with what programmers then called “chat bots” on the public web’s precursor, the Advanced Research Projects Agency Network (ARPANET) (Garber 2014). The automated, perpetual nature of bots, combined with modern computational power, means that bots, whether social or not, are hugely important in scaling work online (Leonard 1998). Bots can achieve discrete, repetitive tasks in a fraction of the time it would take a human counterpart. Because of this, they have been integral to the endless organizational work central to maintaining sites such as Wikipedia and Reddit (Geiger 2014; Long et al. 2017). As bot technology progresses and social media becomes more ubiquitous worldwide, these automatons continue to become more and more useful as political amplification and suppression tools online (Shao et al. 2018).

Advances in machine learning allow social bots to more readily learn from their environment and to use what they find in their interactions on gaming platforms or in their conversations on social media platforms (Baumgarten, Colton, and Morris 2009; Ferrara et al. 2016). For instance, Tay – now known mostly as Microsoft’s failed Twitter chat bot experiment – was first seen as unique because it was built to learn from other users on the platform (Vincent 2016). As Suárez-Gonzalo, Mas-Manchón, and Guerrero-Solé (2019) point out, however, Tay was still a product of its human designers. This is an important distinction because, as they argue, people often see bots as independent actors simply because bots work autonomously. Yet, ultimately, the identity and agency of bots are complicated by way of their symbiotic relationship with the people who build and use them (Neff and Nagy 2016).

Because this usage extends to the social – where bots have real-time conversations with humans on sites like Facebook and Twitter – the engineers who build them often view them as more than a tool but less than human: a proxy for the creator (Woolley, Shorey, and Howard 2018). Social bots play a key role in generating content and are often used to mimic real users on Twitter and many other social media sites and online discussion communities (Kumar et al. 2017). Researchers have developed ways of detecting whether a given online account is a human, bot, or cyborg, but still face challenges in doing so (Chu et al. 2010; Gorwa and Guilbeault 2018). Though the machine learning capabilities used for social bot development are progressing, sophisticated propagandists still make use of both human and bot communication and labor in order to most effectively manipulate public opinion (Paavola et al. 2016). Even the most sophisticated machine learning or deep learning–enabled social bots have trouble parsing human emotion, humor, and sarcasm and as such can be identified more readily than bot–human hybrids that harness human intelligence (Davis et al. 2016; Chatterjee et al. 2019).
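
To make the detection signals described in this literature concrete, the following sketch scores a hypothetical account on a handful of behavioral features. It is a minimal illustration only: the feature names, thresholds, and weights are assumptions chosen for demonstration and do not reproduce any published detector such as BotOrNot.

```python
# Illustrative only: a toy heuristic in the spirit of the detection features
# discussed above (activity rate, profile completeness, content repetition).
# Feature names, thresholds, and weights are assumptions, not a published detector.
from dataclasses import dataclass


@dataclass
class Account:
    tweets_per_day: float           # average posting rate
    has_default_profile: bool       # no custom picture or biography
    duplicate_content_ratio: float  # share of near-identical posts (0-1)
    followers: int
    following: int


def crude_bot_score(a: Account) -> float:
    """Return a 0-1 score; higher values suggest automation (purely illustrative)."""
    score = 0.0
    if a.tweets_per_day > 50:             # inhumanly high, sustained activity
        score += 0.35
    if a.has_default_profile:             # minimal identity investment
        score += 0.15
    if a.duplicate_content_ratio > 0.5:   # mostly repeated or amplified content
        score += 0.30
    if a.following > 0 and a.followers / a.following < 0.1:  # follow-spam pattern
        score += 0.20
    return min(score, 1.0)


# Example: a hyperactive, repetitive, anonymous-looking account scores high.
suspect = Account(tweets_per_day=180, has_default_profile=True,
                  duplicate_content_ratio=0.8, followers=12, following=900)
print(crude_bot_score(suspect))  # 1.0
```

As the literature above stresses, any single heuristic of this kind also catches some enthusiastic human users, which is one reason hybrid human–bot accounts remain difficult to classify.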

Because bots are useful in scaling communication online – and because they provide an additional layer of anonymity over social media – they have become popular tools for spreading political propaganda (Woolley 2016). The next section details research on the various uses – and ramifications – of political bots. It situates the political bot as a global phenomenon: a tool now used in efforts to manipulate public opinion on numerous websites and social media platforms, in a variety of languages and countries.

Political Bots

Political bots – sometimes known as fake followers, astroturf accounts, or sock puppets – are automated social media accounts, often built to look and act like real people, in order to manipulate public opinion (Ratkiewicz et al. 2011; Woolley and Howard 2016b). Political bots can be used to amplify the spread of particularly partisan, or completely false, information. They have, for instance, been used by far-right groups on Twitter to spread content and by anti-vaccine activists to boost false messaging on health communication (Marwick and Lewis 2017; Broniatowski et al. 2018). They can drive up the number of likes, re-messages, or comments associated with a person or idea. Researchers have catalogued political bot use in massively bolstering the social media metrics of politicians and political candidates from Donald Trump to Rodrigo Duterte (Zhang et al. 2018; Uyheng and Carley 2019). They can be used to harass journalists, activists, and political opposition in state-sponsored trolling campaigns (Monaco and Nyss 2018). They are even used in attempts to prioritize, and subsequently harness, online views for particular traditional news sources over others (Sanovich, Stukal, and Tucker 2018).

While events including the 2016 US election and the UK Brexit referendum may have catapulted these ideas to the forefront of the Western zeitgeist, political bots and computational propaganda are global in use and continue to play a role in international political communication at present. Bots have been used over Twitter and other applications to harass journalists and attack dissidents in Mexico, for instance, since at least 2012 (Orcutt 2012; Treré 2016). Automated accounts have been estimated to generate up to 50 percent of the traffic among accounts tweeting about Russian politics (Stukal et al. 2017). In Syria, bots have been used to spread messages in favor of Bashar al-Assad and to confuse and attack opposition (Abokhodair, Yoo, and McDonald 2015). So-called spambots have been used in conversations about Italian politics online to generate civic noise (Cresci et al. 2017). Canadian researchers found that “identification, evidence, attribution, and enforcement” were among the chief problems associated with bots “disrupting” that country’s democratic process (Dubois and McKelvey 2019). During Chile’s 2017 presidential race, bots were deployed to spread Twitter messages related to numerous candidates, including a suspiciously large amount of automated traffic for the eventual third-place progressive candidate Marco Enríquez-Ominami (Castillo et al. 2019).

Though country-specific cases of political bot usage are useful in studying the particular ways these tools are used in particular places, the same groups of bots are often used in efforts to manipulate public opinion across borders, during different types of situations, and in different languages. Studies have revealed that bot accounts used to spread political communication during one country’s election have then been used in a separate country and contest (Ferrara 2017). Similarly, researchers have found that the same bot accounts used in one event or crisis in a country have then been reused in another (Starbird et al. 2014). Still others have found bot networks that switch between multiple languages or written versions of the same language (e.g., simplified versus traditional Chinese) and argued that this can be both a useful feature in bot detection and indicative of outside efforts to influence digital political conversation in other countries (Monaco 2017; Varol et al. 2017). Owing to the complexity of how networks of political bots operate – with the same collections of accounts switching focus between state borders and across multiple tongues – they are often difficult to detect and manage (Morstatter et al. 2016). This has not, however, stopped governments and technology firms from attempting to curb their use.
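
The language-switching signal mentioned above can be illustrated with a short sketch. The assumption here is that each of an account's recent posts already carries a language code (from platform metadata or a language detector); the thresholds are arbitrary choices for demonstration rather than values drawn from the cited studies.

```python
# A minimal sketch of a language-switching signal: flag accounts whose recent
# posts span several languages, none of them marginal. Post structure and
# thresholds are assumptions for illustration.
from collections import Counter
from typing import Iterable


def language_switching_flag(post_langs: Iterable[str],
                            min_languages: int = 3,
                            min_minority_share: float = 0.15) -> bool:
    """post_langs: ISO language codes for an account's recent posts,
    e.g. taken from platform metadata or a language detector."""
    counts = Counter(post_langs)
    total = sum(counts.values())
    if total == 0 or len(counts) < min_languages:
        return False
    # Require the non-dominant languages to make up a meaningful share, so that
    # occasional quoted foreign text does not trigger the flag.
    dominant = counts.most_common(1)[0][1]
    return (total - dominant) / total >= min_minority_share


print(language_switching_flag(["es", "es", "ru", "en", "ru", "es", "en"]))  # True
print(language_switching_flag(["en"] * 40 + ["fr"]))                        # False
```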

There has recently been a spate of policies, both in the United States and elsewhere, attempting to deal with the malicious use of political bots on social media. Many of these policies fall short due to a lack of institutional clarity – in both technology and political circles – about what actually constitutes bot traffic and, indeed, whether all automated traffic is problematic. Researchers have argued “that multiple forms of ambiguity are responsible for much of the complexity underlying contemporary bot-related policy” (Gorwa and Guilbeault 2018). Moreover, Gorwa and Guilbeault suggest that “before successful policy interventions can be formulated, a more comprehensive understanding of bots – especially how they are defined and measured – will be needed.” Indeed, recent US policy has been criticized for taking an overly censorial, broad, and technologically unsophisticated approach to combating and regulating the political use of bots during elections and other events (West 2017; Bromwich 2018). Maréchal (2016) has argued for a normative framework for bots across social media sites in response to such ambiguity about how online platforms define automated accounts and how the public understands them. Yet the quest for a normative framework for understanding bots is challenged not least by the sheer difference in the ways that different categories of political bots are used, despite the recent authorship of several articles that attempt to define and categorize social bots (Grimme et al. 2017; Stieglitz et al. 2017; Gorwa and Guilbeault 2018).

In an early discussion of malicious bot software, Holz (2005) discussed “the zoo” of bot types: from those harnessed in distributed denial-of-service (DDoS) attacks to those deployed for mass identity theft. There is a similar range of political bot varieties. Listener bots can monitor social media sites and databases for key information but also track and communicate what they find (Woolley 2016). Spambots, conversely, are built to generate noise (Cresci et al. 2017). Wikiedits bots can be created to monitor politicians’ edits to Wikipedia pages, but they are also often programmed to announce alleged changes on Twitter in efforts to name and shame – potentially stymying governmental use of Wikipedia (Ford, Dubois, and Puschmann 2016). Sleeper bots are social media accounts that sit on a site like Twitter all but unused for years, in order to generate a more realistic online presence, and are then activated during key political events (Howard, Kollanyi, and Woolley 2016). Troll bots, built to harass, have been used to demobilize activists trying to organize and communicate on Twitter but can also be used to drive traffic from one cause, product, or idea to another (Llewellyn et al. 2019). Finally, honeypot bots are built to attract particular users or even other bots (Lee, Caverlee, and Webb 2010).

It is clear that the collection of academic research into the rise in usage of political bots – and of broader online tactics relating to the promotion of disinformation and polarizing content – has grown since Metaxas, Mustafaraj, and Gayo-Avello’s (2011) early work on suspicious political campaigns on Twitter. Despite this, social scientific research demonstrating large-scale offline sociopolitical effects related to online bot usage remains slim (Tucker et al. 2018). Though many of the aforementioned researchers have demonstrated that social media bots have an active role in political communication around the world, fewer have had success in relating this automated communication directly to electoral outcomes. There has been more success, however, in drawing connections between the types of human users who spread bot content – or disinformative or polarizing content – and why.

Badawy, Lerman, and Ferrara (2018) examine users who spread Russian content during the 2016 US election and determine “that political ideology, bot likelihood scores, and some activity-related account meta data are the most predictive features of whether a user spreads trolls’ content or not.” Woolley and Guilbeault (2017) find that those in positions of power – including politicians, pundits, and journalists – often share Twitter bot–related content when it reflects their own views. The work of Stella, Ferrara, and De Domenico (2018), using data from the 2017 Catalan referendum, builds on this idea. They argue that, in this case, bots – despite often existing on the peripheries of social systems – were successful in exposing influential people to inflammatory and extreme views. Still other studies, however, suggest that digital disinformation, automated or otherwise, has little effect on people’s understanding of politics or that social media use has an insignificant correlation to polarization (Allcott and Gentzkow 2017; Boxell, Gentzkow, and Shapiro 2017).

The debate about the influence of political bots, and about the larger effects of computational propaganda, continues. The literature makes it clear, though, that political bots have become an important new tool for political communication online. Importantly, not all political uses of social bots are malicious or focused on control. There are a variety of examples, and a growing body of research, on the democratically beneficial uses of bots.

Bots for Democratic Good

Journalists, activists, commentators, and civil society groups have built chatbots aimed at openly engendering general political conversation over sites like Wikipedia and over modern social media precursors since the Net went public (Mutton 2004; Tsvetkova et al. 2017). Recently, there has been a rise in public-facing social bots aimed at engendering conversation on pressing social issues, revealing political malfeasance, and calling attention to protests (Sample 2015; Følstad et al. 2018). Bots have even been used to generate stories and to report on pending and real-time natural disasters or public health concerns (Lokot and Diakopoulos 2016; Lemelshtrich 2018). Because bots are able to function automatically at a computationally enhanced rate, they are particularly useful for journalists facing the very real demands of traditional story generation, in that they can facilitate connections with readers to both spread and retrieve news (Gonzales and González 2017). Gonzales and González, exploring the case of a service known as Politibot, which operated over Telegram during the 2016 Spanish election, write that the true journalistic potential of the program was to share relevant news (with more than 70 percent user satisfaction) with readers through that platform.

Hwang, Pearce, and Nanis (2012) explore the ways that bots can be used as a social prosthesis or scaffolding for connecting networks of people that might not otherwise communicate. They argue, citing natural bot-driven experiments on social media, that bots can be effectively used to parse information on a social network, pay particular attention to what people have in common, and connect users based on these interests. They make the point that bots can be used to mitigate the burden of troublesome conversations online. It is important to note that, while the connective use of bots has certain benefits for democracy, it can also be harnessed for control (Woolley and Guilbeault 2017).

While some researchers have examined journalism bots’ capacities to search for information and communicate with readers, others have used these automated digital tools to problematize the idea that communication necessarily exists between two or more people, arguing instead that tools like bots play a key nonhuman role in news sharing online (Larsson and Hallvard 2015). Lokot and Diakopoulos (2016) propose a typology of “news bots” in order to guide the intent, utility, and functionality of bots constructed by future designers and reporting teams. They note the limits of robot journalists – especially in the areas of automated commentary, opinion writing, algorithmic transparency, and general accountability.

The analysis of Lokot and Diakopoulos (2016) primarily focuses on design elements of a sample of extant news bots on Twitter. They examine the various journalistic functions of these accounts and make it clear that news bots could change the modern media environment. In particular, their exploration of journalism bot accounts is concerned with the accounts’ function in generating articles and reporting. They discuss problems associated with the opacity of the algorithms that drive news bots but leave room for a larger discussion about the people who construct those algorithms, what cultural values they encode into that software, and the function of the resultant bots during political crises and elections.

Lokot and Diakopoulos (2016), as well as other researchers, have explored the idea that bots could feasibly replace human journalists in some instances. Indeed, tools like the LA Times’s Quakebot can automatically generate and post stories (Walker 2014). Harvard University’s Nieman Journalism Lab has argued that there will be a large-scale shift toward the “botification” of the news in coming years (Barot 2016). Others, with more unease about automated reporting, have suggested that journalism may be the latest industry to come under threat from automation – because of article-writing bots – or that algorithms may “kill” journalism (Goichman 2017; Keohane 2017). In a case study of three newsrooms, however, Linden (2017) finds that the use of automated software has actually benefited reporters in that bots do the repetitive tasks journalists would otherwise have to do – thus freeing people up for other work. Latar (2018) provides a balanced view of both perspectives, what he terms the pessimistic and optimistic stances on robot journalists. He explores several case studies, including the LA Times, that exemplify both stances.

Democratically beneficial bots provide hopeful foils to their political bot counterparts. In his 2018 hearings before the US Congress (Harwell 2018), which took place because of political misuse of Facebook in 2016, Facebook CEO Mark Zuckerberg spoke of automation and AI as necessary tools to combat the rise of disinformation and misinformation. He pointed out that the sheer informational scale of social media makes it so that human labor alone cannot address the problems at hand. Some researchers have taken up this logic, suggesting that automation may have a role to play in preventing misuse of bots (Wang 2010). Indeed, many bot-detection systems rely on sophisticated algorithms and machine learning (Ratkiewicz et al. 2011; McKelvey and Menczer 2013; Klyueva 2019).

In order to prevent the political misuse of bots over social media, it is crucial to understand the complex ways in which bots facilitate and amplify the flow of misinformation, disinformation, trolling, and propaganda. The next section provides an overview of literature on computational propaganda – one of the umbrella terms and strains of research from the social sciences that attempts to grapple with the problem of political bots.

Understanding Computational Propaganda

Computational propaganda is specifically defined as “the assemblage of social media platforms, autonomous agents, and big data tasked with the manipulation of public opinion” (Woolley and Howard 2016a). Research on computational propaganda spans the social sciences and computer sciences. Although the Computational Propaganda Project at the University of Oxford’s Oxford Internet Institute defined the term and carried out early social scientific work on the topic, the research from Ferrara et al. (2016), Metaxas and Mustafaraj (2012), Ratkiewicz et al. (2011), and others was foundational in building preliminary understandings of how social bots affected the informational and computational systems that make social media possible.

Early research on computational propaganda was focused primarily on how powerful political actors leveraged social media bots for control. A great deal of this work looked at how governments, militaries, political campaigns, corporations, and other well-resourced entities launched such offensives online (Murthy et al. 2016). Now, however, it is clear that many types of groups – including regular citizens and activists – use the tools and tactics of computational propaganda to spread their own perspectives and communicate politically (Woolley 2018).

Indeed, computational propaganda has been propelled by a broader normalization of social media as a means of control by those focused on digital political communication (Karpf 2012). As Chadwick (2013) points out, “even the most radical changes to communications systems must be channeled through structural constraints in order to impact traditional political outcomes” (p. 10). Many old-guard members of the political elite remain in place worldwide, and newly powerful individuals and groups have ascended and now make use of digital tools in efforts to gain and retain power (Howard 2015). As Karpf (2012) points out, these actors have adjusted to, and made use of, the altered state of political communication tools as they exist online. In some ways, digital democracy has not played out as cyber-optimists had hoped. The elite on the Internet are still elite, and thus “online speech follows winner-take-all patterns” (Hindman 2008).

Sociotechnical innovation has led to ever-changing organizational affordances of the multimedia landscape encompassed by social media, both for the elite and for regular people (Treem and Leonardi 2013). The rise of hybridized technology and the “networked society” has not only affected the way political conversations occur; it has also altered the ways campaigns are organized, elections function, and power is exerted (Benkler 2006). New political organizations have been birthed, political systems have changed, and politicians have risen and fallen. Some aspects of political communication, however, remain constant. Computational propaganda is a novel mechanism and strategy for enabling control among well-resourced and powerful groups, though the means to build and launch bots over social media are becoming more widespread – and available to regular citizens – every day (Woolley 2018).

Nimmo and the Digital Forensic Research (DFR) Lab team at the Atlantic Council point out three core features of political bots and computational propaganda, which they claim separate these tactics from traditional propaganda: activity, amplification, and anonymity (Nimmo and DFR Lab 2016). Nimmo writes:

Many of these bot and cyborg accounts do conform to a recognizable pattern: activity, amplification, anonymity. An anonymous account which is inhumanly active and which obsessively amplifies one point of view is likely to be a political bot, rather than a human. Identifying such bots is the first step towards defeating them. (n.p.)
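
A toy operationalization of this "activity, amplification, anonymity" pattern might look like the sketch below. The thresholds and field names are illustrative assumptions, not DFR Lab's actual methodology.

```python
# A toy operationalization of the "activity, amplification, anonymity" pattern
# quoted above. Thresholds and inputs are illustrative assumptions, not the
# DFR Lab's actual methodology.
def three_a_flags(posts_per_day: float,
                  retweet_share: float,
                  has_real_name: bool,
                  has_profile_photo: bool,
                  has_bio: bool) -> dict:
    """Return which of the three 'A' criteria an account trips."""
    return {
        # Activity: sustained posting at a rate that is hard for a human to keep up.
        "activity": posts_per_day > 72,
        # Amplification: the account mostly repeats others rather than authoring posts.
        "amplification": retweet_share > 0.9,
        # Anonymity: no real name, photo, or biographical detail.
        "anonymity": not (has_real_name or has_profile_photo or has_bio),
    }


flags = three_a_flags(posts_per_day=250, retweet_share=0.97,
                      has_real_name=False, has_profile_photo=False, has_bio=False)
print(flags)                # all three criteria are met
print(all(flags.values()))  # True: matches the bot-like pattern Nimmo describes
```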

In order to “defeat” political bots, or broader manipulations that occur by way of algorithms and automation online, researchers have argued that social media firms must accept greater responsibility for the social and political outcomes of the tools they build and design – the algorithms, but also the very concept of platforms, have politics (Gillespie 2010; Gillespie 2014).

Indeed, prominent scholars of political communication argue that social media platforms such as Twitter and Facebook are now crucial transnational mechanisms for political communication (Segerberg and Bennett 2011; Tufekci and Wilson 2012). That is, their use in this regard – at least in most country cases worldwide – is not necessarily restricted by state borders. As such, people from around the globe use them to communicate about political issues with one another. As the research detailed in this chapter reveals, computational propaganda itself is also transnational. It is not confined to only one social media site but stretches across them in a tangled web (Council on Foreign Relations 2018). Disclosures from Facebook, Twitter, and Google reveal, for instance, that government-sponsored Russian citizens used multiple social media sites to spread propagandistic content during the US presidential election (Allcott and Gentzkow 2017).

Research from several sources suggests that computational propaganda and political bot usage was at an all-time high during key moments of this particular election (Bessi and Ferrara 2016; Ferrara et al. 2016; Howard et al. 2016). Bessi and Ferrara (2016) found that “about 400,000 bots [were] engaged in the political discussion about the [US] Presidential election, responsible for roughly 3.8 million tweets, about one-fifth of the entire conversation.” Kollanyi et al. (2016) found that the thousands of Twitter bots supporting Trump outnumbered those supporting Clinton in the days preceding the election at a rate of five to one. Supporters of both candidates used these social automatons to give voters the impression that the campaigns had large-scale online grassroots support, to plant ideas in the news cycle, and to affect trends on digital platforms. At times during the 2016 US contest, regular people were convinced and even tricked by fringe partisans into using bots, coordinated hashtag bombing, and other tools to bolster trending topics that benefited candidates and campaigns (Schreckinger 2016).

“Manufacturing consensus” is a central tactic of those who use computational propaganda (Woolley 2018). It occurs not only when bots boost social media metrics but also when the media reinforces illusory notions of candidate popularity because of this same automated inflation of the numbers. The concept of manufacturing consensus is drawn not just from the ways bots and computational propaganda were used during the 2016 US election but also from a range of parallel uses in multiple countries dating back as early as 2007 (Robb 2007; Gorwa 2017, p. 38). In the Americas, political actors in Mexico (Verkamp and Gupta 2013), Ecuador (Woolley 2015), Venezuela (Forelle et al. 2015), and Brazil (Arnaudo 2017) have pioneered the deployment of related techniques in attempts to boost the credibility of candidates and campaigns through increased metrics: follows, likes, retweets, shares, comments, and so on.

State-sponsored trolling, another novel online political manipulation strategy, is specifically concerned with governmentally driven computational propaganda campaigns aimed at attacking political opposition over social media (Monaco and Nyss 2018; Zannettou et al. 2019). Analysis from Monaco and Nyss on this phenomenon notes that the now global trend of “disinformation is often only one element of a broader politically motivated attack on the credibility and courage of dissenting voices: journalists, opposition politicians and activists” (Monaco and Nyss 2018, p. 4). According to their study, informed by more than two years of research, interviews, and fieldwork spanning more than eight countries, computational propaganda and astroturf political communication are not propagated only by organizations tangentially associated with governments, such as the IRA in Russia. Politically motivated trolling teams, in some cases outfitted with armies of political bots, are housed within the official governmental infrastructure of some countries. A recent report on cyber-troops by Bradshaw and Howard (2017) corroborates these findings and details the ways in which multiple governments now use the tools of computational propaganda.

Increasingly, computational propaganda and the tactic of using political bots to influence online conversation are moving from the political sphere to other topical areas. In healthcare, Twitter bots have been used to amplify anti-vaccine content (Shao et al. 2017; Allem and Ferrara 2018). In fact, coordinated Russian bots and human trolls have been used to manipulate the vaccine debate online (Broniatowski et al. 2018). Social media bots, alongside groups of people, have also played a role in distributing information and, worryingly, misinformation and rumors during natural disasters and terrorist attacks (Gupta et al. 2013; Starbird et al. 2014; Khaund et al. 2018; Vosoughi et al. 2018). Twitter bots also generate a significant share of the tweets linking to scientific articles, which has serious implications for using raw counts of such messages to evaluate or assess the reach or uptake of research on that platform (Haustein et al. 2016).
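
The measurement concern raised by Haustein et al. (2016) can be illustrated with a brief sketch: reporting a bot-filtered count of article mentions alongside the raw count. The data layout and the bot-likelihood scores are hypothetical.

```python
# A brief sketch of the measurement concern described above: raw tweet counts
# for a scientific article can be inflated by automated accounts, so analysts
# may want to report a bot-filtered count alongside the raw one. The data
# layout and bot-likelihood scores here are hypothetical.
from typing import List, Tuple


def mention_counts(mentions: List[Tuple[str, float]],
                   bot_score_cutoff: float = 0.5) -> Tuple[int, int]:
    """mentions: (account_id, bot_likelihood 0-1) pairs for tweets linking to one article.
    Returns (raw_count, count_excluding_likely_bots)."""
    raw = len(mentions)
    filtered = sum(1 for _, score in mentions if score < bot_score_cutoff)
    return raw, filtered


mentions = [("a1", 0.05), ("a2", 0.92), ("a3", 0.88), ("a4", 0.10), ("a5", 0.95)]
raw, human_like = mention_counts(mentions)
print(raw, human_like)  # 5 2 -- most of the apparent "impact" came from likely bots
```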

Computational propaganda shows no signs of abating, and there is a great deal of research to be done in order to build thorough understandings of the topic. The next section identifies gaps in the current research and suggestions for future work.

Conclusion and Gaps

Computational propaganda is still propaganda. What has changed about this new form of an old strategy of control and coercion is that it now happens at scale. This new political communication strategy is scaled not just in terms of the number of people around the world who can and do access computational propaganda but also by the computational power and constantly advancing software – not only political bots but also data analytics technologies such as machine learning and sentiment analysis – that facilitate it.

Several areas are at the forefront of innovation, and future problems, associated with political bot and computational propaganda usage. The first, both in the United States and globally, is policy and the law. How will these political domains be affected by the rise in political manipulation over social media? What laws are needed to regulate firms where disinformation is spread? The academy must aid policymakers by undertaking more empirical research to inform policy recommendations to be delivered to key US politicians, policy experts, civil society groups, and journalists. Campaign finance, election law, voting rights, privacy, and several other areas of the law are currently being affected in both unforeseen and complex ways by the spread of political disinformation over social media. Solid research into the ways computational propaganda contravenes the law is a crucial step in addressing the policy gap at the intersection of information dissemination, automation, social media, and politics.

We need better software, informed by both social and computer science research, to help researchers, journalists, and activists keep up with the challenges posed by the modern disinformation threat. Tools could include high-powered data intelligence platforms that make use of bots to parse large sets of relevant data and that are usable by these groups worldwide. They ought to exploit recent advances in graph databases, machine learning, and cheap, massive computation to dramatically accelerate investigations. The target should be to help civil society identify patterns of activity that would root out the entities backing disinformation campaigns, in addition to uncovering a great deal about when and where these campaigns are occurring.
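
As one hedged illustration of what such investigative tooling might do, the sketch below builds a co-amplification graph linking accounts that repeatedly shared the same URLs and surfaces dense clusters for human review. The input format, thresholds, and use of the networkx library are assumptions made for demonstration, not a description of any existing platform.

```python
# A sketch of graph-based triage: link accounts that repeatedly shared the same
# URLs, then surface suspiciously dense clusters for human review. Input format
# and thresholds are assumptions for illustration.
from collections import defaultdict
from itertools import combinations

import networkx as nx

# (account_id, shared_url) pairs, e.g. extracted from a collected tweet dataset.
shares = [
    ("acct1", "http://example.com/story"), ("acct2", "http://example.com/story"),
    ("acct3", "http://example.com/story"), ("acct1", "http://example.com/other"),
    ("acct2", "http://example.com/other"), ("acct3", "http://example.com/other"),
    ("acct4", "http://example.com/unrelated"),
]

url_to_accounts = defaultdict(set)
for account, url in shares:
    url_to_accounts[url].add(account)

# Accounts are linked each time they share the same URL; edge weights count co-shares.
g = nx.Graph()
for accounts in url_to_accounts.values():
    for a, b in combinations(sorted(accounts), 2):
        if g.has_edge(a, b):
            g[a][b]["weight"] += 1
        else:
            g.add_edge(a, b, weight=1)

# Keep only pairs that co-shared repeatedly, then report the resulting clusters.
strong = nx.Graph()
for a, b, data in g.edges(data=True):
    if data["weight"] >= 2:
        strong.add_edge(a, b, weight=data["weight"])

for cluster in nx.connected_components(strong):
    if len(cluster) >= 3:
        print("possible coordinated cluster:", sorted(cluster))
```

A real investigation would add time windows so that only near-simultaneous sharing counts as co-amplification, and would treat flagged clusters as leads for human analysts rather than proof of coordination.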

Longitudinal work can help establish more solid metrics for tracking information flows – but also effects – related to the use of political bots, computational propaganda, and, correspondingly, disinformation and online polarization. Quantitative insight into the roles of automation, network structure, temporal markers, and message semantics can allow experienced researchers to create effective ways of measuring the flow of political manipulation over social media across sustained periods. The results of longitudinal research on this phenomenon will be crucial to building evolving, long-term public and governmental understandings of computational propaganda.

There is potential for the continued use of bots as technologies for democratic engagement. There are also ongoing efforts in the academy to develop software to detect malicious bots and disinformation. Research-grounded tools to detect bots on social media, led by the team at Indiana University that developed Botometer (previously BotOrNot), are on the rise and are becoming more effective (Varol et al. 2018). Other groups are developing tools to study how disinformation, or fake news, is spread and whether or not a tweet is credible (Gupta et al. 2014; Shao et al. 2016). Start-ups including RoBhat Labs are simultaneously creating browser plug-ins and apps that track both bots and propaganda (Smiley 2017). As Varol and Uluturk (2018) aptly point out, however, “we should also be aware of the limitations of human-mediated systems as well as algorithmic approaches and employ them wisely and appropriately to tackle weaknesses of existing communication systems.” Software solutions, no matter how sophisticated the technology, can only mitigate a portion of the problems intrinsic to computational propaganda. Social solutions must be implemented as well.
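
For researchers who want to experiment with such detection tools, a usage sketch for the Botometer service's Python client is shown below. It assumes valid Twitter and RapidAPI credentials, and the constructor arguments and response fields follow the client's published examples at the time of writing; the interface may differ across API versions, so treat this strictly as an illustration rather than a definitive integration.

```python
# A usage sketch for the Botometer Python client, assuming valid Twitter and
# RapidAPI credentials. Constructor arguments and response fields follow the
# client's published examples and may differ across API versions.
import botometer

twitter_app_auth = {
    "consumer_key": "XXXX",
    "consumer_secret": "XXXX",
    "access_token": "XXXX",
    "access_token_secret": "XXXX",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="XXXX",   # key for the Botometer endpoint on RapidAPI
    **twitter_app_auth,
)

# Score a single account; the response includes category-level bot scores.
result = bom.check_account("@example_account")
print(result.get("cap"))   # "complete automation probability", if present

# Screen a list of accounts and review the highest-scoring ones manually.
for screen_name, res in bom.check_accounts_in(["@acct_one", "@acct_two"]):
    print(screen_name, res.get("cap"))
```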

References

Abokhodair, N., Yoo, D., & McDonald, D. W. (2015). Dissecting a social botnet: Growth, content and influence in Twitter. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 839–851). Vancouver: ACM. https://doi.org/10.1145/2675133.2675208
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. NBER Working Paper No. 23089. https://doi.org/10.3386/w23089
Allem, J.-P., & Ferrara, E. (2018). Could social bots pose a threat to public health? American Journal of Public Health, 108(8), 1005–1006. https://doi.org/10.2105/AJPH.2018.304512
Arnaudo, D. (2017). Computational propaganda in Brazil: Social bots during elections. Computational Propaganda Research Project Working Paper Series. Oxford: Oxford Internet Institute.
Badawy, A., Lerman, K., & Ferrara, E. (2018). Who falls for online political manipulation? arXiv.org. http://arxiv.org/abs/1808.03281
Barot, T. (2016). The botification of news. Nieman Lab, December. www.niemanlab.org/2015/12/the-botification-of-news/
Baumgarten, R., Colton, C., & Morris, M. (2009). Combining AI methods for learning bots in a real-time strategy game. International Journal of Computer Games Technology. www.hindawi.com/journals/ijcgt/2009/129075/abs/
Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press.
Bessi, A., & Ferrara, E. (2016). Social bots distort the 2016 U.S. presidential election online discussion. First Monday, 21(11). https://doi.org/10.5210/fm.v21i11.7090
Boxell, L., Gentzkow, M., & Shapiro, J. M. (2017). Is the Internet causing political polarization? Evidence from demographics. NBER Working Paper No. 23258. https://doi.org/10.3386/w23258
Bradshaw, S., & Howard, P. (2017). Troops, trolls and troublemakers: A global inventory of organized social media manipulation. Computational Propaganda Research Project Working Paper Series. Oxford: Oxford Internet Institute.
Bromwich, J. E. (2018). Bots of the Internet, reveal yourselves! New York Times, July 18. www.nytimes.com/2018/07/16/style/how-to-regulate-bots.html
Broniatowski, D. A., Jamison, A. M., Qi, S., et al. (2018). Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate. American Journal of Public Health, 108(10), 1378–1384. https://doi.org/10.2105/AJPH.2018.304567
Castillo, S., Allende-Cid, H., Palma, W., et al. (2019). Detection of bots and cyborgs in Twitter: A study on the Chilean presidential election in 2017. In Meiselwitz, G. (Ed.), Social Computing and Social Media: Design, Human Behavior and Analytics (pp. 311–323). Basel: Springer International Publishing.
Chadwick, A. (2013). The Hybrid Media System: Politics and Power. Oxford: Oxford University Press.
Chatterjee, A., Gupta, U., Chinnakotla, M. K., Srikanth, R., Galley, M., & Agrawal, P. (2019). Understanding emotions in text using deep learning and big data. Computers in Human Behavior, 93, 309–317. https://doi.org/10.1016/j.chb.2018.12.029
Chu, Z., Gianvecchio, S., Wang, H., & Jajodia, S. (2010). Who is tweeting on Twitter: Human, bot, or cyborg? In Proceedings of the 26th Annual Computer Security Applications Conference (pp. 21–30). Austin, TX: ACM. http://dl.acm.org/citation.cfm?id=1920265
Copley, C. (2016). Merkel fears social bots may manipulate German election. Reuters, November 24. https://uk.reuters.com/article/uk-germany-merkel-socialbots-idUKKBN13J1V2
Council on Foreign Relations. (2018). Political disruptions: Combating disinformation and fake news. Council on Foreign Relations website. www.cfr.org/event/political-disruptions-combating-disinformation-and-fake-news
Cresci, S., Di Pietro, R., Petrocchi, M., Spognardi, A., & Tesconi, M. (2017). The paradigm-shift of social spambots: Evidence, theories, and tools for the arms race. In Proceedings of the 26th International Conference on World Wide Web Companion (pp. 963–972). Perth: ACM. https://doi.org/10.1145/3041021.3055135
Davis, C. A., Varol, O., Ferrara, E., Flammini, A., & Menczer, F. (2016). BotOrNot: A system to evaluate social bots. In Proceedings of the 25th International Conference Companion on World Wide Web (pp. 273–274). Geneva: ACM. https://doi.org/10.1145/2872518.2889302
Dubois, E., & McKelvey, F. R. (2019). Political bots: Disrupting Canada’s democracy. Canadian Journal of Communication, 44(2). https://doi.org/10.22230/cjc.2019v44n2a3511
Facebook. (2017). Facebook, Inc. Quarterly Report September 20, 2017 (10-Q No. 001-35551). Securities and Exchange Commission. www.sec.gov/Archives/edgar/data/1326801/000132680117000053/fb-09302017x10q.htm
Ferrara, E. (2017). Disinformation and social bot operations in the run up to the 2017 French presidential election. SSRN Scholarly Paper No. ID 2995809. https://papers.ssrn.com/abstract=2995809
Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59(7), 96–104. https://doi.org/10.1145/2818717
Følstad, A., Brandtzaeg, P. B., Feltwell, T., Law, E. L. C., Tscheligi, M., & Luger, E. (2018). Chatbots for social good. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. Paper No. SIG06.
Ford, H., Dubois, E., & Puschmann, C. (2016). Keeping Ottawa honest – one tweet at a time? Politicians, journalists, Wikipedians, and their Twitter bots. International Journal of Communication, 10, 4891–4914.
Forelle, M. C., Howard, P. N., Monroy-Hernandez, A., & Savage, S. (2015). Political bots and the manipulation of public opinion in Venezuela. SSRN Scholarly Paper No. ID 2635800. https://papers.ssrn.com/abstract=2635800
Garber, M. (2014). When PARRY met ELIZA: A ridiculous chatbot conversation from 1972. The Atlantic, June 9. www.theatlantic.com/technology/archive/2014/06/when-parry-met-eliza-a-ridiculous-chatbot-conversation-from-1972/372428/
Geiger, R. S. (2014). Bots, bespoke code, and the materiality of software platforms. Information, Communication & Society, 17(3), 342–356.
Gillespie, T. (2010). The politics of “platforms.” New Media & Society, 12(3), 347–364. https://doi.org/10.1177/1461444809342738
Gillespie, T. (2014). The relevance of algorithms. In Gillespie, T., Boczkowski, P. J., & Foot, K. A. (Eds.), Media Technologies: Essays on Communication, Materiality, and Society (pp. 167–193). Cambridge, MA: MIT Press.
Goichman, R. (2017). Written by a robot: Will algorithms kill journalism? Haaretz, February 15. www.haaretz.com/israel-news/business/1.771758Google Scholar
Gonzales, H. M. S., & González, M. S. (2017). Bots as a news service and its emotional connection with audiences: The case of Politibot. Doxa Comunicación. Revista Interdisciplinar de Estudios de Comunicación y Ciencias Sociales, 0(25), 6384.Google Scholar
Gorwa, R. (2017). Computational propaganda in Poland: False amplifiers and the digital public sphere. Computational Propaganda Research Project Working Paper Series. Oxford: Oxford Internet Institute.Google Scholar
Gorwa, R., & Guilbeault, D. (2018). Unpacking the social media bot: A typology to guide research and policy. Policy & Internet. https://doi.org/10.1002/poi3.184CrossRefGoogle Scholar
Grimme, C., Preuss, M., Adam, L., & Trautmann, H. (2017). Social bots: Human-like by means of human control? Big Data, 5(4), 279293. https://doi.org/10.1089/big.2017.0044Google Scholar
Gupta, A., Kumaraguru, P., Castillo, C., & Meier, P. (2014). TweetCred: Real-time credibility assessment of content on Twitter. In Aiello, L. M. & McFarland, D. (Eds.), Proceedings of Social Informatics (SocInfo): 6th International Conference (pp. 228243). Barcelona: SocInfo. https://doi.org/10.1007/978-3-319-13734-6_16Google Scholar
Gupta, A., Lamba, H., Kumaraguru, P., & Joshi, A. (2013). Faking Sandy: Characterizing and identifying fake images on Twitter during Hurricane Sandy. In Proceedings of the 22nd International Conference on the World Wide Web 2013 (pp. 729736). Rio de Janeiro: ACM.Google Scholar
Harwell, D. (2018). AI will solve Facebook’s most vexing problems, Mark Zuckerberg says. Just don’t ask when or how. Washington Post, April 11. www.washingtonpost.com/news/the-switch/wp/2018/04/11/ai-will-solve-facebooks-most-vexing-problems-mark-zuckerberg-says-just-dont-ask-when-or-how/Google Scholar
Haustein, S., Bowman, T. D., Holmberg, K., Tsou, A., Sugimoto, C. R., & Larivière, V. (2016). Tweets as impact indicators: Examining the implications of automated “bot” accounts on Twitter. Journal of the Association for Information Science and Technology, 67(1), 232238. https://doi.org/doi:10.1002/asi.23456Google Scholar
Hindman, M. (2008). The Myth of Digital Democracy. Princeton: Princeton University Press.Google Scholar
Holz, T. (2005). A short visit to the bot zoo [malicious bots software]. IEEE Security Privacy, 3(3), 7679. https://doi.org/10.1109/MSP.2005.58CrossRefGoogle Scholar
Howard, P. N. (2015). Pax Technica: How the Internet of Things May Set Us Free or Lock Us Up. New Haven, CT: Yale University Press.
Howard, P. N., Kollanyi, B., & Woolley, S. C. (2016). Bots and automation over Twitter during the US election. Computational Propaganda Research Project Working Paper Series. Oxford: Oxford Internet Institute.
Hwang, T., Pearce, I., & Nanis, M. (2012). Socialbots: Voices from the fronts. Interactions, 19(2), 38–45.
Incapsula. (2015). 2015 Bot Traffic Report. www.incapsula.com/blog/bot-traffic-report-2015.html
Karpf, D. (2012). The MoveOn Effect: The Unexpected Transformation of American Political Advocacy. Oxford: Oxford University Press.
Keohane, J. (2017). A robot may have written this story. Wired, February 16. www.wired.com/2017/02/robots-wrote-this-story/
Khaund, T., Al-Khateeb, S., Tokdemir, S., & Agarwal, N. (2018). Analyzing social bots and their coordination during natural disasters. In Thomson, R., Dancy, C., Hyder, A., & Bisgin, H. (Eds.), Social, Cultural, and Behavioral Modeling (pp. 207–212). Basel: Springer International Publishing.
Klyueva, A. (2019). Trolls, bots, and whatnots: Deceptive content, deception detection, and deception suppression. In Chiluwa, I. & Samoilenko, S. (Eds.), Handbook of Research on Deception, Fake News, and Misinformation Online (pp. 18–32). Hershey, PA: IGI Global. https://doi.org/10.4018/978-1-5225-8535-0.ch002
Kollanyi, B., Howard, P. N., & Woolley, S. C. (2016). Bots and automation over Twitter during the U.S. election. Computational Propaganda Research Project Working Paper Series. Oxford: Oxford Internet Institute.
Kramer, A., Guillory, J., & Hancock, J. (2015). Experimental evidence of massive-scale emotional contagion through social networks. PNAS, 111(24), 8788–8790. https://doi.org/10.1073/pnas.1320040111
Kumar, S., Cheng, J., Leskovec, J., & Subrahmanian, V. S. (2017). An army of me: Sockpuppets in online discussion communities. In Proceedings of the 26th International Conference on World Wide Web (pp. 857–866). Perth: ACM. https://doi.org/10.1145/3038912.3052677
Larsson, A. O., & Hallvard, M. (2015). Bots or journalists? News sharing on Twitter. Communications, 40(3), 361–370. https://doi.org/10.1515/commun-2015-0014
Latar, N. L. (2018). Robot Journalism: Can Human Journalism Survive? Singapore: World Scientific.
Lee, K., Caverlee, J., & Webb, S. (2010). The social honeypot project: Protecting online communities from spammers. In Proceedings of the 19th International Conference on World Wide Web (pp. 1139–1140). Raleigh, NC: ACM. https://doi.org/10.1145/1772690.1772843
Lemelshtrich, L. N. (2018). Robot Journalism: Can Human Journalism Survive? Singapore: World Scientific.
Leonard, A. (1998). Bots: The Origin of a New Species. New York: Penguin Books.
Linden, T. C.-G. (2017). Algorithms for journalism. The Journal of Media Innovations, 4(1), 60–76. https://doi.org/10.5617/jmi.v4i1.2420
Llewellyn, C., Cram, L., Hill, R. L., & Favero, A. (2019). For whom the bell trolls: Shifting troll behaviour in the Twitter Brexit debate. JCMS: Journal of Common Market Studies. https://doi.org/10.1111/jcms.12882
Lokot, T., & Diakopoulos, N. (2016). News bots: Automating news and information dissemination on Twitter. Digital Journalism, 4(6), 682–699. https://doi.org/10.1080/21670811.2015.1081822
Long, K., Vines, J., Sutton, S. et al. (2017). “Could you define that in bot terms?”: Requesting, creating and using bots on Reddit. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 3488–3500). Denver: ACM. https://doi.org/10.1145/3025453.3025830
Luceri, L., Deb, A., Badawy, A., & Ferrara, E. (2019). Red bots do it better: Comparative analysis of social bot partisan behavior. In WWW ’19: Companion Proceedings of The 2019 World Wide Web Conference (pp. 1007–1012). San Francisco: ACM. https://doi.org/10.1145/3308560.3316735
Maréchal, N. (2016). Automation, algorithms, and politics | When bots tweet: Toward a normative framework for bots on social networking sites (Feature). International Journal of Communication, 10, 5022–5031.
Martineau, P. (2018). What is a bot? Wired, November 16. www.wired.com/story/the-know-it-alls-what-is-a-bot/
Marwick, A., & Lewis, B. (2017). Media Manipulation and Disinformation Online. New York: Data & Society Research Institute.
McKelvey, K. R., & Menczer, F. (2013). Truthy: Enabling the study of online social networks. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work Companion (pp. 23–26). San Antonio: ACM. http://dl.acm.org/citation.cfm?id=2441962
Metaxas, P. T., & Mustafaraj, E. (2012). Social media and the elections. Science, 338(6106), 472–473.
Metaxas, P. T., Mustafaraj, E., & Gayo-Avello, D. (2011). How (not) to predict elections. In Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE Third International Conference on and 2011 IEEE Third International Conference on Social Computing (SocialCom) (pp. 165–171). Boston: IEEE. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6113109
Middlebrook, S. T., & Muller, J. (2000). Thoughts on bots: The emerging law of electronic agents. Business Lawyer, 56, 341.
Monaco, N. (2017). Computational propaganda in Taiwan: Where digital democracy meets automated autocracy. Computational Propaganda Research Project Working Paper Series. Oxford: Oxford Internet Institute.
Monaco, N., & Nyss, C. (2018). State sponsored trolling: How governments are deploying fake news as part of broader harassment campaigns. Institute for the Future Working Research Papers.
Morstatter, F., Wu, L., Nazer, T. H., Carley, K. M., & Liu, H. (2016). A new approach to bot detection: Striking the balance between precision and recall. In 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) (pp. 533–540). Davis, CA: IEEE/ACM. https://doi.org/10.1109/ASONAM.2016.7752287
Motti, J. (2014). Twitter acknowledges 23 million active users are actually bots. Tech Times, August 12. www.techtimes.com/articles/12840/20140812/twitter-acknowledges-14-percent-users-bots-5-percent-spam-bots.htm
Murthy, D., Powell, A., Tinati, R. et al. (2016). Can bots influence a political discussion? Social capital, technical skill, and conversations about public affairs. International Journal of Communication, 10(Special Issue), 20.
Mutton, P. (2004). Inferring and visualizing social networks on Internet relay chat. In Proceedings of the Eighth International Conference on Information Visualisation, 2004 (pp. 35–43). London: IEEE. https://doi.org/10.1109/IV.2004.1320122
Neff, G., & Nagy, P. (2016). Automation, algorithms, and politics | Talking to bots: Symbiotic agency and the case of Tay. International Journal of Communication, 10, 4915–4931.
Nimmo, B., & DFR Lab. (2016). Human, bot or cyborg? Medium, December 23. https://medium.com/@DFRLab/human-bot-or-cyborg-41273cdb1e17
Orcutt, M. (2012). Twitter mischief plagues Mexico’s election. MIT Technology Review, June 21. www.technologyreview.com/news/428286/twitter-mischief-plagues-mexicos-election/
Paavola, J., Helo, T., Jalonen, H., Sartonen, M., & Huhtinen, A.-M. (2016). Understanding the trolling phenomenon: The automated detection of bots and cyborgs in the social media. Journal of Information Warfare, 15(4), 100–111.
Ratkiewicz, J., Conover, M., Meiss, M., Goncalves, B., Flammini, A., & Menczer, F. (2011). Detecting and tracking political abuse in social media. In Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media (ICWSM). Barcelona: AAAI Press. www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/viewFile/2850/3274
Ratkiewicz, J., Conover, M., Meiss, M. et al. (2011). Truthy: Mapping the spread of astroturf in microblog streams. In Proceedings of the 20th International Conference Companion on World Wide Web (pp. 249–252). Hyderabad: ACM. http://dl.acm.org/citation.cfm?id=1963301
Robb, J. (2007). When bots attack. Wired, August 23. www.wired.com/2007/08/ff-estonia-bots/
Sample, M. (2015). Protest bots. In Karhio, A., Ramada Prieto, L., & Rettberg, S. (Eds.), The Ends of Electronic Literature (p. 58). Bergen: Electronic Literature Organization and University of Bergen.
Sanovich, S., Stukal, D., & Tucker, J. A. (2018). Turning the virtual tables: Government strategies for addressing online opposition with an application to Russia. Comparative Politics, 50(3), 435–482. https://doi.org/10.5129/001041518822704890
Schreckinger, B. (2016). Inside Trump’s “cyborg” Twitter army. Politico, September 30. http://politi.co/2dyhCD0
Segerberg, A., & Bennett, W. L. (2011). Social media and the organization of collective action: Using Twitter to explore the ecologies of two climate change protests. The Communication Review, 14(3), 197–215. https://doi.org/10.1080/10714421.2011.597250
Seymour, T., Frantsvog, D., & Kumar, S. (2011). History of search engines. International Journal of Management & Information Systems (IJMIS), 15(4), 47–58. https://doi.org/10.19030/ijmis.v15i4.5799
Shao, C., Ciampaglia, G. L., Flammini, A., & Menczer, F. (2016). Hoaxy: A platform for tracking online misinformation. arXiv.org. https://doi.org/10.1145/2872518.2890098
Shao, C., Ciampaglia, G. L., Varol, O., Flammini, A., & Menczer, F. (2017). The spread of low-credibility content by social bots. ArXiv. https://arxiv.org/abs/1707.07592
Shao, C., Ciampaglia, G. L., Varol, O., Yang, K.-C., Flammini, A., & Menczer, F. (2018). The spread of low-credibility content by social bots. Nature Communications, 9(1), 1–9. https://doi.org/10.1038/s41467-018-06930-7
Smiley, L. (2017). The college kids doing what Twitter won’t | Backchannel. Wired, November 1. www.wired.com/story/the-college-kids-doing-what-twitter-wont/
Starbird, K., Maddock, J., Orand, M., Achterman, P., & Mason, R. M. (2014). Rumors, false flags, and digital vigilantes: Misinformation on Twitter after the 2013 Boston Marathon bombing. In Proceedings of iConference 2014. Berlin: iSchools Inc. https://doi.org/10.9776/14308
Stella, M., Ferrara, E., & De Domenico, M. (2018). Bots increase exposure to negative and inflammatory content in online social systems. Proceedings of the National Academy of Sciences, 115(49), 12435–12440. https://doi.org/10.1073/pnas.1803470115
Stieglitz, S., Brachten, F., Ross, B., & Jung, A.-K. (2017). Do social bots dream of electric sheep? A categorisation of social media bot accounts. ArXiv. http://arxiv.org/abs/1710.04044
Stukal, D., Sanovich, S., Bonneau, R., & Tucker, J. A. (2017). Detecting bots on Russian political Twitter. Big Data, 5(4), 310–324. https://doi.org/10.1089/big.2017.0038
Suárez-Gonzalo, S., Mas-Manchón, L., & Guerrero-Solé, F. (2019). Tay is you: The attribution of responsibility in the algorithmic culture. Observatorio (OBS*), 13(2). https://doi.org/10.15847/obsOBS13220191432
Treem, J. W., & Leonardi, P. M. (2013). Social media use in organizations: Exploring the affordances of visibility, editability, persistence, and association. Annals of the International Communication Association, 36(1), 143–189. https://doi.org/10.1080/23808985.2013.11679130
Treré, E. (2016). The dark side of digital politics: Understanding the algorithmic manufacturing of consent and the hindering of online dissidence. IDS Bulletin, 47(1). https://doi.org/10.19088/1968-2016.111
Tsvetkova, M., García-Gavilanes, R., Floridi, L., & Yasseri, T. (2017). Even good bots fight: The case of Wikipedia. PLoS ONE, 12(2), e0171774. https://doi.org/10.1371/journal.pone.0171774
Tucker, J. A., Guess, A., Barberá, P. et al. (2018). Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature. Hewlett Foundation report. https://hewlett.org/library/social-media-political-polarization-political-disinformation-review-scientific-literature/
Tufekci, Z., & Wilson, C. (2012). Social media and the decision to participate in political protest: Observations from Tahrir Square. Journal of Communication, 62(2), 363–379. https://doi.org/10.1111/j.1460-2466.2012.01629.x
Uyheng, J., & Carley, K. M. (2019). Characterizing bot networks on Twitter: An empirical analysis of contentious issues in the Asia-Pacific. In Thomson, R., Bisgin, H., Dancy, C., & Hyder, A. (Eds.), Social, Cultural, and Behavioral Modeling (pp. 153–162). Basel: Springer International Publishing.
Varol, O., Davis, C., Menczer, F., & Flammini, A. (2018). Feature engineering for social bot detection. In Dong, G. & Liu, H. (Eds.), Feature Engineering for Machine Learning and Data Analytics (pp. 331–334). Boca Raton, FL: CRC Press/Taylor & Francis.
Varol, O., Ferrara, E., Davis, C. A., Menczer, F., & Flammini, A. (2017). Online human-bot interactions: Detection, estimation, and characterization. In Proceedings of the Eleventh International AAAI Conference on Web and Social Media (ICWSM). Montreal: AAAI Press. www.aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/view/15587
Varol, O., & Uluturk, I. (2018). Deception strategies and threats for online discussions. First Monday, 23(5). www.firstmonday.dk/ojs/index.php/fm/article/view/7883
Verkamp, J. P., & Gupta, M. (2013). Five incidents, one theme: Twitter spam as a weapon to drown voices of protest. Paper presented at the USENIX Workshop on Free and Open Communications on the Internet (FOCI), August 13, Washington, DC.
Vincent, J. (2016). Twitter taught Microsoft’s friendly AI chatbot to be a racist asshole in less than a day. The Verge, March 24. www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
Wagner, C., Mitter, S., Körner, C., & Strohmaier, M. (2012). When social bots attack: Modeling susceptibility of users in online social networks. In Proceedings of WWW ’12. http://www2012.org/proceedings/nocompanion/MSM2012_paper_11.pdf
Walker, A. (2014). Quakebot: An algorithm that writes the news about earthquakes. Gizmodo, March 19. https://gizmodo.com/quakebot-an-algorithm-that-writes-the-news-about-earth-1547182732
Wang, A. H. (2010). Detecting spam bots in online social networking sites: A machine learning approach. In Foresti, S. & Jajodia, S. (Eds.), Data and Applications Security and Privacy XXIV (pp. 335–342). Berlin: Springer.
Washington Post Staff. (2017). Full transcript: Sally Yates and James Clapper testify on Russian election interference. Washington Post, May 8. www.washingtonpost.com/news/post-politics/wp/2017/05/08/full-transcript-sally-yates-and-james-clapper-testify-on-russian-election-interference/
Weld, D. S., & Etzioni, O. (1995). Intelligent agents on the Internet: Fact, fiction, and forecast. IEEE Intelligent Systems, 10(4), 44–49.
West, D. M. (2017). How to combat fake news and disinformation. Brookings, December 18. www.brookings.edu/research/how-to-combat-fake-news-and-disinformation/
Woolley, S. (2015). #HackingTeam leaks: Ecuador is spending millions on malware, pro-government trolls. Global Voices Advocacy, August 4. https://advox.globalvoices.org/2015/08/04/hackingteam-leaks-ecuador-is-spending-millions-on-malware-pro-government-trolls/
Woolley, S. (2016). Automating power: Social bot interference in global politics. First Monday, 21(4). http://firstmonday.org/ojs/index.php/fm/article/view/6161
Woolley, S. (2018). Manufacturing consensus: Computational propaganda and the 2016 United States presidential election. Ph.D. dissertation, University of Washington.
Woolley, S., & Guilbeault, D. (2017). Computational propaganda in the United States of America: Manufacturing consensus online. Computational Propaganda Project Working Paper Series. Oxford: Oxford Internet Institute.
Woolley, S., & Howard, P. N. (2016a). Automation, algorithms, and politics | Political communication, computational propaganda, and autonomous agents – Introduction. International Journal of Communication, 10, 4882–4890.
Woolley, S., & Howard, P. N. (2016b). Social media, revolution, and the rise of the political bot. In Robinson, P., Seib, P., & Frohlich, R. (Eds.), Routledge Handbook of Media, Conflict, and Security (pp. 282–292). London: Taylor & Francis.
Woolley, S., & Howard, P. N. (2018). Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. Oxford: Oxford University Press.
Woolley, S., Shorey, S., & Howard, P. (2018). The bot proxy: Designing automated self expression. In Papacharissi, Z. (Ed.), A Networked Self and Platforms, Stories, Connections (pp. 59–76). London: Routledge.
Zannettou, S., Caulfield, T., Setzer, W., Sirivianos, M., Stringhini, G., & Blackburn, J. (2019). Who let the trolls out? Towards understanding state-sponsored trolls. In Proceedings of the 10th ACM Conference on Web Science (pp. 353–362). Amsterdam: ACM. https://doi.org/10.1145/3292522.3326016
Zhang, Y., Wells, C., Wang, S., & Rohe, K. (2018). Attention and amplification in the hybrid media system: The composition and activity of Donald Trump’s Twitter following during the 2016 presidential election. New Media & Society, 20(9), 3161–3182. https://doi.org/10.1177/1461444817744390
Zhuang, L., Dunagan, J., Simon, D. R., Wang, H. J., Osipkov, I., & Tygar, J. D. (2008). Characterizing botnets from email spam records. LEET, 8(1), 1–9.
