
Disinformation At Scale: Using GPT-3 Maliciously for Information Operations

Conference: Black Hat USA 2021

2021-08-04

Summary

The presentation discusses the capabilities and potential risks of GPT-3, a large language model trained on open internet data, in generating deceptive, biased, or abusive language at scale. The tool can be used to spin news stories and to persuade people to change their stance on major issues.
  • Training a model this large requires vast amounts of data, so the quality of the training data is sacrificed for quantity
  • GPT-3 can generate outputs that are compelling and almost human-like, but not always authentic
  • The presentation includes a demo tool called Twatter, a GPT-3-only social media site that generates a chosen number of tweets at a selected temperature
  • The presentation also includes a survey that assesses GPT-3's persuasive power in changing people's stance on major issues
  • The survey results show that GPT-3 can generate persuasive arguments for and against major issues, but the authenticity and effectiveness of the arguments vary depending on the issue and the target audience
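The "temperature" setting mentioned in the Twatter bullet controls how random the model's output is. As an illustration only (this is not code from the talk), here is a minimal sketch of temperature-scaled softmax sampling, the standard mechanism behind that knob:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/temperature, then normalize to probabilities.
    Low temperature sharpens the distribution (safer, more repetitive text);
    high temperature flattens it (more surprising, less coherent text)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits for three candidate tokens
logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, temperature=0.2)
hot = softmax_with_temperature(logits, temperature=2.0)
```

At temperature 0.2 almost all probability mass lands on the top token, while at 2.0 the choices are closer to uniform — which is why a disinformation operator might raise the temperature to make generated tweets look varied rather than templated.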
The presentation includes an anecdote about GPT-3's ability to spin news stories: when given an AP article about the chaos at the Capitol on January 6th, GPT-3 rewrote it with a strongly pro-Trump slant, suggesting the tool can seed fake news stories in support of a chosen narrative.

Abstract

Last year, OpenAI developed GPT-3—currently the largest and most powerful natural language model in the world. The select groups that were granted first access quickly demonstrated that it can write realistic text from almost any genre—including articles that humans couldn’t distinguish from real news stories. In the wrong hands, this tool can tear at the fabric of society and bring disinformation operations to an entirely new scale.

Based on six months of privileged access to GPT-3, our research tries to answer just how useful GPT-3 can be for information operators looking to spread lies and deceit. Can GPT-3 be used to amplify disinformation narratives? Can it come up with explosive news stories on its own? Can it create text that might fuel the next QAnon? Can it really change people’s stances on world affairs? We will show how we got GPT-3 to do all this and discuss ways to prepare for the next wave of automated disinformation.
