All the News That’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation
Published online by Cambridge University Press: 20 November 2020
Abstract
Online misinformation has become a constant; only the way actors create and distribute that information is changing. Advances in artificial intelligence (AI) such as GPT-2 mean that actors can now synthetically generate text in ways that mimic the style and substance of human-created news stories. We carried out three original experiments to study whether these AI-generated texts are credible and can influence opinions on foreign policy. The first evaluated human perceptions of AI-generated text relative to an original story. The second investigated the interaction between partisanship and AI-generated news. The third examined the distributions of perceived credibility across different AI model sizes. We find that individuals are largely incapable of distinguishing between AI- and human-generated text; partisanship affects the perceived credibility of the story; and exposure to the text does little to change individuals’ policy views. The findings have important implications for understanding the role of AI in online misinformation campaigns.
Type: Research Article

Copyright: © The Author(s), 2020. Published by Cambridge University Press on behalf of The Experimental Research Section of the American Political Science Association
Footnotes
The data, code, and any additional materials required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at: doi:10.7910/DVN/1XVYU3. This research was conducted using Sarah Kreps’ personal research funds. Early access to GPT-2 was provided in-kind by OpenAI under a non-disclosure agreement. Sarah Kreps and Miles McCain otherwise have no relationships with interested parties. Miles Brundage is employed by OpenAI.