
Repurposing Neural Networks to Generate Synthetic Media for Information Operations

Conference: Black Hat USA 2020

2020-08-05

Summary

The presentation discusses the risks and challenges that synthetic media poses for information operations, and how open source neural networks can be fine-tuned for custom, potentially malicious purposes.
  • GPT-2's training data set consisted of over 40 gigabytes of internet text drawn from over 8 million web pages, with high Reddit karma on the linking posts used as a proxy for page quality.
  • Fine-tuning GPT-2 on a new language modeling task, using additional training data consisting of open source social media posts from accounts operated by Russia's infamous Internet Research Agency (IRA), can make its output read more like social media posts, with their shorter length, informal grammar, erratic punctuation, and syntactic quirks such as hashtags and emojis (a fine-tuning sketch follows this list).
  • Synthetic media can be weaponized for offensive social media-driven information operations, making detection, attribution, and response challenging.
  • Companies like Lyrebird restrict the sale of model output to customers who can demonstrate the legal right to use the voice being synthesized.
  • To date, there have been no liability cases over the use of AI-generated images that resemble a real person.
  • One prominent campaign tactic involves fabricating journalist personas and reaching out to real-world experts and political figures to disingenuously solicit audio and video interviews that advance an Iranian political agenda.
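
To make the fine-tuning step concrete, below is a minimal sketch using the Hugging Face transformers library. The file name ira_posts.txt, the checkpoint directory gpt2-ira, and all hyperparameters are illustrative assumptions, not the presenters' actual setup.

```python
# Minimal sketch: fine-tune a pre-trained GPT-2 checkpoint on a corpus of
# social media posts (one post per line in a hypothetical text file).
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    TextDataset,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # open source checkpoint

# Chunk the raw text into fixed-length blocks for language modeling.
dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="ira_posts.txt",  # hypothetical training corpus
    block_size=128,
)
# mlm=False selects causal (GPT-style) language modeling labels.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-ira",
        num_train_epochs=3,
        per_device_train_batch_size=4,
    ),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
trainer.save_model("gpt2-ira")
tokenizer.save_pretrained("gpt2-ira")
```

The point of the sketch is the asymmetry the talk highlights: the expensive pre-training run is inherited for free from the released checkpoint, and only this short, cheap fine-tuning pass is needed to shift the model's style.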

Abstract

Deep neural networks routinely achieve near human-level performance on a variety of tasks, but each new breakthrough demands massive volumes of quality data, access to expensive GPU clusters, and weeks or even months to train from scratch. AI researchers commonly release model checkpoints to avoid the wasteful duplication of these costly training runs, since fine-tuning pre-trained neural networks for custom tasks requires less data, time, and money compared to training them from scratch. While this emerging model-sharing ecosystem beneficially lowers the barrier to entry for non-experts, it also gives a leg up to those seeking to leverage open source models for malicious purposes. Using open source pre-trained natural language processing, computer vision, and speech recognition neural networks, we demonstrate the relative ease with which fine-tuning in the text, image, and audio domains can be adapted for generative impersonation. We quantify the effort involved in generating credible synthetic media, along with the challenges that time- and resource-limited investigators face in detecting generations produced by fine-tuned models. We wargame out these capabilities in the context of social media-driven information operations, and assess the challenges underlying detection, attribution, and response in scenarios where actors can anonymously generate and distribute credible fake content. Our resulting analysis suggests meaningful paths forward for a future where synthetically generated media increasingly looks, speaks, and writes like us.
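
As an illustration of how cheaply a fine-tuned checkpoint can then be put to work, the sketch below samples short, post-length generations from it via the transformers generate API. The checkpoint directory gpt2-ira carries over from the earlier sketch; the prompt and sampling parameters are assumptions.

```python
# Minimal sketch: sample social-media-style text from a fine-tuned
# GPT-2 checkpoint saved by the earlier fine-tuning sketch.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-ira")
model = GPT2LMHeadModel.from_pretrained("gpt2-ira")

prompt = "Breaking:"  # hypothetical seed text
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Top-k / nucleus (top-p) sampling keeps output varied rather than
# repetitive; max_length is kept small for post-length generations.
outputs = model.generate(
    input_ids,
    max_length=40,
    do_sample=True,
    top_k=50,
    top_p=0.9,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```

Generation at this point needs only a commodity CPU or single GPU, which is what makes at-scale distribution of such content hard for resource-limited investigators to counter.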
