
Why Bother Using Bots?

Published online by Cambridge University Press:  02 June 2022

LEAH COSTIK*
Affiliation: UNIVERSITY OF MINNESOTA


Type: Russia and Ukraine
Copyright: © American Political Science Association 2022

As the Russian invasion of Ukraine continues to unfold, social media platforms are clamping down on Russian state-owned media, a key lever the Kremlin uses to spread propaganda and disinformation. The survival of non-democratic regimes depends in part on their ability to manage the information environment in this way. Social media has become a key ingredient in autocrats’ toolkits for responding to online opposition, which includes the use of trolls and automated bot accounts. But what are bots? What work do they do? And how might authoritarian regimes use this social media tool to their advantage? In their new article, authors Stukal, Sanovich, Bonneau, and Tucker explore these pressing questions through an investigation of the use of pro-government Twitter bots within Russia during times of both offline and online political protest.

While current research explores authoritarian regimes’ use of human trolls, much less work examines bots, or algorithmically controlled social media accounts. Stukal et al. argue that bots offer a number of advantages over other “digital information manipulation tools”: they are inexpensive, difficult to trace, can be deployed in large numbers, require no human intervention, and can run online indefinitely. The authors focus specifically on Twitter bots, algorithmically controlled accounts that can automatically perform many of the actions of a normal (human) user, including posting, retweeting, replying, and liking posts.

Authoritarian regimes can use Twitter bots for a variety of reasons: bots may be used to show support for controversial government programs or for candidates hoping for reelection; regional governors are encouraged by the Kremlin to use social media, but public employees “often lack the necessary skills for effective social media communication and rely on bots to artificially inflate relevant activity indicators”; and non-government actors, such as businessmen, may use bots to signal support for politicians in hopes of receiving perks or payoffs. In their article, Stukal et al. remain agnostic about why people deploy Twitter bots, assuming only that government agencies and non-governmental actors alike use them to maximize the benefits they offer.

The authors theorize that in a competitive authoritarian environment, Twitter bots could be used in an attempt to alter the cost-benefit analysis of participating in opposition movements, whether online or offline. They draw on two theoretical frameworks. First, they theorize that Twitter bots could be used “to reduce participation in offline protests.” Second, they theorize that Twitter bots could be used to “control the online agenda” and thus “will be mobilized in response to opposition online activity.” Twitter bots may use the same tactics to achieve these different goals. From these frameworks, the authors derive four strategies Twitter bots might use.
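One way to see this cost-benefit logic in a single line (a minimal sketch, not the authors’ formal model) is to suppose a citizen joins a protest only when the expected payoff is positive:

$$p \cdot B - C > 0,$$

where $p$ is the perceived probability that the protest succeeds, $B$ is the benefit if it does, and $C$ is the personal cost of participating. The strategies that follow map onto these terms: cheerleading works to lower the perceived $p$, while trolling and harassment raise $C$.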

The first strategy available to Twitter bots is to de-emphasize a protest-related agenda by increasing the frequency with which they post content (“volume amplification”). Similarly, the second strategy is to distract social media users by retweeting a more diverse set of accounts (“retweet diversity”). The third strategy involves decreasing opposition supporters’ expected benefits by tweeting pro-government posts about Vladimir Putin. The logic behind this “cheerleading” is to make Putin appear more popular, which may lead potential protesters to believe their protest is less likely to succeed. A fourth and final strategy involves “increasing the expected costs of supporting opposition” through trolling and harassment. This “negative campaigning” strategy is measured by the number of tweets pro-government Twitter bots produce that mention Alexey Navalny, a charismatic and prominent Russian opposition leader.
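To make these four measures concrete, here is a minimal illustrative sketch in Python of how daily strategy indicators might be computed from a table of bot tweets. It is not the authors’ measurement code: the column names (`date`, `text`, `retweeted_user`) and the keyword proxies for Putin and Navalny mentions are assumptions for illustration.

```python
import pandas as pd

def daily_strategy_measures(tweets: pd.DataFrame) -> pd.DataFrame:
    """Per-day proxies for the four bot strategies.

    Assumes `tweets` has one row per bot tweet with columns:
      date           -- calendar date of the tweet
      text           -- tweet text
      retweeted_user -- account retweeted, or NaN for original tweets
    """
    grouped = tweets.groupby("date")
    return pd.DataFrame({
        # Volume amplification: total tweets the bots post per day.
        "volume": grouped.size(),
        # Retweet diversity: number of distinct accounts retweeted per day.
        "retweet_diversity": grouped["retweeted_user"].nunique(),
        # Cheerleading: tweets mentioning Putin (crude keyword proxy).
        "cheerleading": grouped["text"].apply(
            lambda s: s.str.contains("putin|путин", case=False, na=False).sum()
        ),
        # Negative campaigning: tweets mentioning Navalny (same caveat).
        "negative_campaigning": grouped["text"].apply(
            lambda s: s.str.contains("navalny|навальн", case=False, na=False).sum()
        ),
    })
```

In practice the paper’s measures are more careful than simple keyword counts, but the sketch shows how each strategy reduces to an observable daily quantity.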

The authors use machine learning to detect bots on Russian political Twitter, identifying 1,516 pro-government Twitter bots responsible for over one million tweets. The authors then identify both offline protests and online opposition activity. Offline protests were identified in a three-step process: using the Integrated Crisis Early Warning System (ICEWS), a project that “automatically extracts information from news articles,” to generate a list of offline protests; manually searching for mentions of protests in both English- and Russian-language mass media; and cross-checking the data against three other protest datasets. Stukal et al. identify online opposition activity through spikes, defined as “a day with at least five times as many tweets from opposition accounts as they posted on a median day a month before and after that day,” within the tweets of 15 activists, independent journalists, or mass media outlets that report favorably and extensively on the Russian opposition.
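The spike definition is concrete enough to sketch in code. The snippet below is one illustrative reading of that rule, not the authors’ actual pipeline; in particular, pooling the month before and the month after into a single median is an assumption here.

```python
import pandas as pd

def find_spikes(daily_counts: pd.Series, window: int = 30, factor: int = 5) -> pd.Series:
    """Flag a day as a spike if its tweet count is at least `factor`
    times the median daily count over the `window` days before and
    after that day (the day itself excluded).

    `daily_counts` is indexed by consecutive calendar dates.
    """
    spikes = pd.Series(False, index=daily_counts.index)
    for i in range(len(daily_counts)):
        neighbors = pd.concat([
            daily_counts.iloc[max(0, i - window):i],  # month before
            daily_counts.iloc[i + 1:i + 1 + window],  # month after
        ])
        if len(neighbors) > 0 and daily_counts.iloc[i] >= factor * neighbors.median():
            spikes.iloc[i] = True
    return spikes
```

Applied to a series of daily tweet counts from the 15 opposition accounts, this returns a boolean series marking candidate spike days of online opposition activity.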

To measure the effect that spikes in online opposition activity and offline protests have on Twitter bot strategies, Stukal et al. use various statistical analyses. They find that their hypotheses regarding the bots’ negative-campaigning strategy are rejected and that results for “cheerleading” are mixed, while their hypotheses regarding volume amplification and retweet diversity are confirmed. In other words, Twitter bots do increase their activity and retweet a wider range of accounts in an attempt to de-emphasize a protest-related agenda. Intriguingly, the authors find that bots are used more often in response to online opposition activity than to offline protests.

Stukal, Sanovich, Bonneau, and Tucker’s research offers several valuable contributions to questions about the use of social media in competitive authoritarian contexts. First, they bridge diverse bodies of scholarship, including computer science research on bot detection and political science research on authoritarian politics. Second, they develop testable hypotheses about the ways in which Twitter bots may be employed to “counter domestic opposition activity either online or offline.” Third, Stukal et al. demonstrate that some previous findings on human trolls do not carry over to automated bots. Most critically, the authors contribute to and advance research on the tools non-democratic regimes have at their disposal to undermine opposition. ■

Stukal, Denis, Sergey Sanovich, Richard Bonneau, and Joshua A. Tucker. 2022. “Why Botter: How Pro-Government Bots Fight Opposition in Russia.” American Political Science Review, 1–15. https://doi.org/10.1017/S0003055421001507.