
A Hacker's Guide to Reducing Side-Channel Attack Surfaces Using Deep-Learning

Conference:  BlackHat USA 2020

2020-08-06

Summary

The presentation discusses how to use deep learning and dynamic analysis to reduce side-channel attack surfaces in hardware cryptography. The approach involves leveraging AI explainability to quickly assess which parts of the implementation are responsible for the information leakages.
  • Side-channel attacks target the implementation of an algorithm rather than the algorithm itself, making them a very efficient way to attack secure hardware.
  • Debugging hardware is harder than debugging software because finding the source of a leak requires looking at both the software and the hardware.
  • The presentation showcases a concrete, step-by-step example of how to use a tool called SCALD (Side-Channel Attack Leak Detector) to find where a TinyAES implementation running on an STM32F4 is leaking.
  • The approach combines deep learning and dynamic analysis to quickly and efficiently find the origin of the leakage (see the sketch at the end of this summary).
  • The presentation also discusses the benefits of using AI explainability to map the model's findings back to the code and pinpoint the part of the code that interacts with the hardware in a way that makes it vulnerable to side-channel attacks.
  • The ultimate goal is to create a side-channel debugger that helps reduce the cost of finding and pinpointing leaks accurately, freeing up more time to focus on developing stronger cryptography.
The presentation cites an example of a powerful side-channel attack from a few years back, in which researchers were able to extract the private keys from a hardware Bitcoin wallet called Trezor. This attack highlights the devastating impact that side-channel attacks can have on secure hardware.
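
To make the deep-learning side of the approach more concrete, here is a minimal, illustrative sketch of the general technique (it is not the SCALD code shown in the talk): a small 1D convolutional network is trained on power traces to predict a key-dependent intermediate value, and validation accuracy meaningfully above random guessing (1/256) signals exploitable leakage. The dataset shapes, capture parameters, and targeted intermediate are assumptions made for illustration.

# Minimal sketch (not the talk's SCALD code): train a small 1D CNN to predict
# a key-dependent intermediate (e.g., SBox(plaintext ^ key byte)) from power traces.
import numpy as np
import tensorflow as tf

num_traces, trace_len = 10_000, 5_000            # assumed capture parameters
traces = np.random.randn(num_traces, trace_len, 1).astype("float32")  # placeholder traces
labels = np.random.randint(0, 256, size=num_traces)                   # placeholder intermediates

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(trace_len, 1)),
    tf.keras.layers.Conv1D(16, 32, strides=8, activation="relu"),
    tf.keras.layers.MaxPool1D(4),
    tf.keras.layers.Conv1D(32, 16, strides=4, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(256, activation="softmax"),  # one class per byte value
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(traces, labels, validation_split=0.1, epochs=5, batch_size=128)
# With real, aligned traces (labels derived from the known plaintexts and keys
# used during capture), validation accuracy far above chance (1/256) indicates
# that the implementation leaks the targeted intermediate.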

Abstract

In recent years, deep-learning based side-channel attacks have proven to be very effective and opened the door to automated implementation-evaluation techniques. Building on this line of work, this talk explores how to take the approach a step further and showcases how to leverage recent advances in AI explainability to quickly assess which parts of the implementation are responsible for the information leakage. Through a concrete step-by-step example, we will showcase the promise of this approach, its limitations, and how it can be used today.
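
As an illustration of the explainability step mentioned in the abstract, the sketch below applies input-gradient saliency, one common attribution technique, to a trained trace classifier such as the one sketched in the summary above. The talk's actual attribution method, and the dynamic-analysis mapping from trace samples back to source lines, may differ; the helper sample_to_source_line is hypothetical.

# Illustrative attribution sketch: which time samples of the trace drive the
# model's prediction? Assumes `model` and `traces` from the previous sketch.
import numpy as np
import tensorflow as tf

def saliency(model, batch):
    """Mean absolute input gradient of the top predicted class, per time sample."""
    x = tf.convert_to_tensor(batch)
    with tf.GradientTape() as tape:
        tape.watch(x)
        preds = model(x)                       # (batch, 256) class probabilities
        top = tf.reduce_max(preds, axis=-1)    # confidence of the predicted class
    grads = tape.gradient(top, x)              # (batch, trace_len, 1)
    return tf.reduce_mean(tf.abs(grads), axis=0).numpy().squeeze()

scores = saliency(model, traces[:256])
hot_samples = np.argsort(scores)[-20:]         # most leakage-relevant time samples

# The dynamic-analysis half: a cycle-accurate execution log that maps each
# trace sample index to the instruction/source line executing at that moment
# turns "hot samples" into "leaky lines of code".
# for s in hot_samples:
#     print(sample_to_source_line(s))          # sample_to_source_line is a hypothetical helper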
