Deepfake Social Engineering: Creating a Framework for Synthetic Media Social Engineering

Conference: Black Hat USA 2021



The presentation discusses the potential threats posed by synthetic media in cybersecurity and the need for a holistic approach to combat them.
  • Synthetic media, such as deepfake videos, pose a serious threat to cybersecurity because they can deceive both individuals and automated systems.
  • A holistic approach that includes human-centric solutions, such as training and policy, is necessary to combat these threats.
  • An attack's success depends heavily on how familiar the synthetic media is to the victim and on its level of interactivity.
  • The presentation also discusses the potential use of synthetic media in combating cybercrime, such as responding to gift card scammers.
  • The speaker is working on a project to investigate whether humans can detect deepfake videos and how familiarity influences their ability to do so.
The speaker describes investigating cases in which four million dollars were transferred and emphasizes the importance of having a second person review a transfer before it is authorized. They also recommend multi-channel verification policies to prevent such attacks, and discuss the potential use of synthetic media in virtual kidnappings and in social engineering within Zoom rooms.
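The second-person review policy described above can be sketched in a few lines. This is a minimal illustration, not an implementation from the talk: the threshold value, function name, and role labels are all hypothetical.

```python
# Hypothetical sketch of a two-person authorization rule for wire transfers:
# no single request, however convincing the voice or video behind it,
# should be able to move funds above a policy threshold on its own.

THRESHOLD = 10_000  # assumed policy threshold, in dollars


def transfer_authorized(amount: int, requester: str, approvers: list[str]) -> bool:
    """Approve only if a second, distinct person has reviewed the transfer."""
    if amount < THRESHOLD:
        return True  # small transfers follow the normal process
    # Count reviewers other than the person who initiated the request.
    independent = {a for a in approvers if a != requester}
    return len(independent) >= 1


print(transfer_authorized(4_000_000, "cfo", ["cfo"]))          # False
print(transfer_authorized(4_000_000, "cfo", ["cfo", "ciso"]))  # True
```

The point of the design is that a deepfaked executive on a call can pressure one employee, but the policy forces the attacker to deceive a second, independent person through a separate channel.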


How do you know that you are actually talking to the person you think you are talking to? Deepfake and related synthetic media technologies represent the greatest revolution in social engineering capabilities in the past century. In recent years, scammers have used synthetic audio in vishing attacks to impersonate executives and convince employees to wire funds to unauthorized accounts. In March 2021, the FBI warned the security community to expect a significant increase in synthetic-media-enabled scams over the following 18 months. The security community is at a highly dynamic moment in history in which the world is transitioning away from being able to trust what we experience with our own eyes and ears.

This presentation proposes the Synthetic Media Social Engineering framework to describe these attacks and offers easy-to-implement, human-centric countermeasures. The framework encompasses five dimensions: Medium (text, audio, video, or a combination), Interactivity (pre-recorded, asynchronous, or real-time), Control (human puppeteer, software, or a hybrid), Familiarity (unfamiliar, familiar, or close), and Intended Target (human or automation; an individual or a broader audience).

While several technology-based methods for detecting synthetic media exist, this work focuses on human-centric countermeasures because most technology-based solutions are not readily available to the average user and are difficult to apply in real time. Behavior-focused training can teach users to spot inconsistencies between the behavior of the legitimate person and that of a Synthetic Media Social Engineering puppet. Proof-of-life statements will effectively counter most virtual kidnappings. Financial transfers should require either multi-factor authentication (MFA) or multi-person authorization. These 'old-school' solutions will find new life in the emerging world of Synthetic Media Social Engineering attacks, and this presentation will help audience members adapt to this new reality.
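As a rough illustration of how the five framework dimensions compose, an attack can be classified as one value per dimension. This sketch is my own encoding, not material from the talk; the enum member names and the classification of the vishing example are assumptions.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative encoding of the five-dimension Synthetic Media Social
# Engineering framework; member names are paraphrased labels, not the
# talk's official terminology.

class Medium(Enum):
    TEXT, AUDIO, VIDEO, COMBINATION = range(4)

class Interactivity(Enum):
    PRE_RECORDED, ASYNCHRONOUS, REAL_TIME = range(3)

class Control(Enum):
    HUMAN_PUPPETEER, SOFTWARE, HYBRID = range(3)

class Familiarity(Enum):
    UNFAMILIAR, FAMILIAR, CLOSE = range(3)

class Target(Enum):
    HUMAN, AUTOMATION = range(2)

@dataclass
class SMSEAttack:
    medium: Medium
    interactivity: Interactivity
    control: Control
    familiarity: Familiarity
    target: Target

# One plausible classification of the executive-voice vishing scenario:
# live synthetic audio of a familiar executive, aimed at a human employee.
ceo_vishing = SMSEAttack(
    medium=Medium.AUDIO,
    interactivity=Interactivity.REAL_TIME,
    control=Control.HYBRID,
    familiarity=Familiarity.FAMILIAR,
    target=Target.HUMAN,
)
print(ceo_vishing)
```

Treating each dimension as an independent axis makes it easy to compare attacks and to reason about which countermeasure (training, proof-of-life, multi-person authorization) fits which region of the attack space.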