
AI Gone Rogue: Exterminating Deep Fakes Before They Cause Menace

Conference:  BlackHat EU 2018

2018-12-06

Summary

The presentation discusses the dangers of deep fake technology and proposes a solution using deep learning to identify complex deep fake videos.
  • Deep fake technology is an AI-based human image blending method used to create fake videos that can cause chaos and do economic and emotional damage to a person's reputation.
  • Videos targeting politicians in the form of cyber propaganda can prove catastrophic to a country's government.
  • The proposed solution is to identify complex deep fake videos using deep learning by training a pre-trained FaceNet model on image data of people of importance or concern (sketched in the code right after this list).
  • After training, the final-layer embeddings of the known faces are stored in a database; frames sampled from a video under test are passed through the same network, and their final-layer outputs are compared against the stored values to confirm the video's authenticity.
  • Defensive measures against deep fake technology are also discussed.
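The enrolment step in the bullets above could be realized roughly as follows. This is a minimal sketch, not the speakers' released code: it assumes the third-party facenet-pytorch package for the pre-trained FaceNet model (InceptionResnetV1) and the MTCNN face detector, and a hypothetical directory layout of one folder of JPEG images per person of concern.

    # Minimal enrolment sketch (illustrative only), assuming the facenet-pytorch
    # package; the directory layout data/people_of_concern/<person>/<image>.jpg
    # is a hypothetical example, not something specified in the talk.
    from pathlib import Path

    import torch
    from PIL import Image
    from facenet_pytorch import MTCNN, InceptionResnetV1

    mtcnn = MTCNN(image_size=160)                               # face detector / cropper
    facenet = InceptionResnetV1(pretrained='vggface2').eval()   # pre-trained FaceNet

    def enroll(image_dir: str) -> dict:
        """Store one averaged final-layer embedding per person of concern."""
        db = {}
        for person_dir in Path(image_dir).iterdir():
            if not person_dir.is_dir():
                continue
            embeddings = []
            for img_path in sorted(person_dir.glob('*.jpg')):
                face = mtcnn(Image.open(img_path).convert('RGB'))
                if face is None:                                # no face found in this image
                    continue
                with torch.no_grad():
                    embeddings.append(facenet(face.unsqueeze(0)))
            if embeddings:
                # average several embeddings per person to reduce sensitivity
                # to lighting and pose
                db[person_dir.name] = torch.cat(embeddings).mean(dim=0)
        return db

    reference_db = enroll('data/people_of_concern')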
The presentation warns about the potential dangers of deep fake technology, such as fake videos of politicians appearing to speak Italian or making comments they never made just before voting day, which could alter the outcome of elections and create tensions within the country and at a global level. It proposes a deep-learning-based solution for identifying complex deep fake videos and preventing such scenarios.

Abstract

The face: a crucial means of identity. But what if this crucial means of identity is stolen from you? This is already happening, and it is termed 'Deep fake.' Deep fake technology is an artificial-intelligence-based human image blending method used in different ways, such as to create revenge porn, fake celebrity pornographic videos, or even cyber propaganda. Videos are altered using Generative Adversarial Networks, in which the face of the speaker is manipulated by a network and tailored to someone else's face. These videos can sometimes be identified as fake by the human eye; however, as neural networks are trained rigorously on more resources, it will become increasingly difficult to identify fake videos. Such videos can cause chaos and do economic and emotional damage to a person's reputation. Videos targeting politicians in the form of cyber propaganda can prove catastrophic to a country's government.

We will discuss the many tentacles of deep fake and the dreadful damage it can cause. Most importantly, this talk will provide a demo of the proposed solution: identifying complex deep fake videos using deep learning. This can be achieved using a pre-trained FaceNet model. The model can be trained on image data of people of importance or concern. After training, the output of the final layer will be stored in a database. A set of images sampled from a video will be passed through the neural network, and the output of the final layer will be compared to the values stored in the database. The mean squared difference would confirm the authenticity of the video. In 2018, we believe that deep fake will progress to a different level. We will also talk about defensive measures against deep fake.
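The verification step described in the abstract (sample frames, embed them with the same network, and score them against the stored values with the mean squared difference) could look roughly like the sketch below. It reuses mtcnn, facenet, and reference_db from the earlier snippet; the OpenCV frame-sampling rate and the decision threshold are assumed values for illustration, not numbers given in the talk.

    # Verification sketch (illustrative only); reuses mtcnn, facenet and
    # reference_db from the enrolment snippet above.
    import cv2
    import torch
    from PIL import Image

    def video_is_authentic(video_path, claimed_identity,
                           every_n_frames=30, threshold=0.9):
        """Compare embeddings of sampled frames against the enrolled identity."""
        reference = reference_db[claimed_identity]
        cap = cv2.VideoCapture(video_path)
        distances, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % every_n_frames == 0:                       # sample a subset of frames
                rgb = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
                face = mtcnn(rgb)
                if face is not None:
                    with torch.no_grad():
                        emb = facenet(face.unsqueeze(0)).squeeze(0)
                    # mean squared difference between frame and reference embeddings
                    distances.append(torch.mean((emb - reference) ** 2).item())
            idx += 1
        cap.release()
        # A low average distance to the enrolled face suggests the footage is genuine;
        # the 0.9 threshold is an assumed placeholder that would need calibration.
        return bool(distances) and sum(distances) / len(distances) < threshold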

