
Hermes Attack: Steal DNN Models In AI Privatization Deployment Scenarios

Conference: BlackHat EU 2020

2020-12-09

Summary

The Hermes Attack is a new technique that allows an adversary to fully reconstruct the whole DNN model in AI privatization deployment scenarios by intercepting PCIe traffic and reverse-engineering it to recover high-level semantics.
  • AI services are used everywhere and require high-quality DNN models that are expensive to build and are deployed commercially
  • Attackers are motivated to steal these models, but existing extraction techniques can only reconstruct partial models
  • The Hermes Attack is the first technique that can fully steal DNN models with zero inference accuracy loss
  • The attack involves intercepting PCIe traffic, reverse-engineering the traffic to recover high-level semantics, and fully reconstructing the DNN model (a toy sketch of the reconstruction idea follows this summary)
  • The reverse-engineering step is challenging due to the closed-source runtime, driver, and GPU instructions, as well as millions of noise packets from out-of-order PCIe traffic and unrelated control traffic
  • The attack technique can work on all existing GPUs and AI accelerators
  • Potential countermeasures to mitigate such attacks need to be developed
In AI privatization deployment scenarios, a company may own a private high-quality DNN model, e.g., for live-face authentication, and license it to other companies for a fee. The model owner has a strong motivation to protect the model's confidentiality, while the licensee has physical access to its own machines and a motivation to steal the DNN model to save the license fee for the coming years. The Hermes Attack identifies the PCIe bus connecting the host and the GPU/AI accelerator as a new attack surface that allows such an adversary to fully reconstruct the whole DNN model.
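
The reconstruction idea in the last step can be pictured with a minimal Python sketch: assuming the attacker has already logged the DMA writes that carry model weights, each as a (device address, payload) pair, and has already guessed where a weight buffer starts and how many parameters it holds, reordering the writes by address and reinterpreting the merged bytes as float32 values recovers that buffer. The function name, the capture format, and the little-endian float32 layout are illustrative assumptions, not details confirmed by the talk.

import numpy as np

def reassemble_weights(dma_writes, base_addr, num_params):
    # Rebuild one weight buffer from out-of-order PCIe DMA writes.
    #   dma_writes: iterable of (device_address, payload_bytes) tuples from the capture
    #   base_addr:  device address where the target weight buffer is believed to start
    #   num_params: number of float32 parameters expected in that buffer
    buf = bytearray(num_params * 4)
    for addr, payload in dma_writes:
        offset = addr - base_addr
        # Keep only writes that fall inside the target buffer; everything else
        # (control packets, other tensors) is treated as noise and dropped.
        if 0 <= offset < len(buf):
            end = min(offset + len(payload), len(buf))
            buf[offset:end] = payload[:end - offset]
    # Reinterpret the reassembled bytes as a flat little-endian float32 vector.
    return np.frombuffer(bytes(buf), dtype="<f4")

In a real attack, base_addr and num_params would themselves have to be recovered by reverse-engineering the command stream, which is exactly the hard part of the second step.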

Abstract

AI privatization deployment is becoming a big market in China and the US. For example, company A has a private high-quality DNN model for live-face authentication, and it would like to sell this DNN model to other companies for a license fee, e.g., millions of dollars per year. In this privatization deployment scenario, company A (the model owner) allows company B to use the DNN model but is motivated to protect its confidentiality, while company B has physical access to its own machines and is motivated to steal the DNN model to save the license fee for the coming years. Company A usually protects its model with existing software-hardening and model-protection techniques on the host side, e.g., secure boot, full disk encryption, runtime access control, and root privilege restrictions. Fortunately, all existing model extraction attacks are NOT able to reconstruct the whole DNN model. Thus, at this point, people still have the illusion that the model is safe (or at least that the leakage is acceptable).

However, in this talk, we identify the PCIe bus connecting the host and the GPU/AI accelerator as a new attack surface, which allows an adversary to FULLY reconstruct the WHOLE DNN model. We believe this attack technique can also work in other similar scenarios, such as a smartphone with an NPU. The attack has three main steps: (1) intercept the PCIe traffic; (2) reverse-engineer the traffic to recover high-level semantics; and (3) fully reconstruct the DNN model. The second (reverse-engineering) step is very challenging due to the closed-source runtime, driver, and GPU instructions, as well as the millions of noise packets caused by out-of-order PCIe traffic and unrelated control traffic.

In the presentation, we will present the details of the attack steps and the algorithm for reconstructing the DNN model. We will also show three demos using three real-world GPUs (NVIDIA GeForce GT 730, NVIDIA GeForce GTX 1080 Ti, and NVIDIA GeForce RTX 2080 Ti) and three DNNs (MNIST, VGG, and ResNet). We believe our attack technique is able to work on all existing GPUs and AI accelerators. Finally, we will discuss potential countermeasures to mitigate such attacks. We hope that through our work, people will rethink the security of AI privatization deployment and harden AI systems at both the software and hardware levels.
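
As a rough illustration of how the reverse-engineering and reconstruction steps fit together, the sketch below assumes the intercepted command stream has already been decoded into an ordered list of GPU kernel names; matching those names against known layer patterns then suggests the model's layer sequence. The kernel names and the pattern-to-layer mapping are purely hypothetical examples for illustration, not the actual closed-source kernel names recovered in the talk.

# Hypothetical mapping from kernel-name fragments to DNN layer types.
KERNEL_TO_LAYER = {
    "implicit_convolve": "Conv2D",
    "gemm": "FullyConnected",
    "pooling_fw": "MaxPool",
    "relu": "ReLU",
    "softmax_fw": "Softmax",
}

def recover_architecture(kernel_launches):
    # Map an ordered list of recovered kernel names to a layer sequence.
    layers = []
    for name in kernel_launches:
        for pattern, layer in KERNEL_TO_LAYER.items():
            if pattern in name.lower():
                layers.append(layer)
                break
    return layers

# Example with made-up kernel names, in the order they were launched:
print(recover_architecture([
    "implicit_convolve_sgemm_128x32",
    "relu_fw_kernel",
    "pooling_fw_4d_kernel",
    "sgemm_128x64_nn",
    "softmax_fw_kernel",
]))
# -> ['Conv2D', 'ReLU', 'MaxPool', 'FullyConnected', 'Softmax']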
