
Don't Red-Team AI Like a Chump

Conference: DEF CON 27

2019-08-01

Summary

The talk focuses on the practicality of red-teaming AI systems and emphasizes the importance of understanding the threat model of the system being attacked.
  • Most proposed attacks on AI systems focus on the algorithm rather than the system in which the algorithm is deployed.
  • Red-teaming AI systems should focus on the data processing pipeline as the primary target.
  • Understanding the threat model of the system being attacked is crucial.
  • An anecdote illustrates the difference between a sexy attack and a practical one.
  • The talk is a knowledge dump of practical wisdom gained from experience.
The speaker tells a story about two different attacks on Tesla's self-driving cars. One attack injected an adversarial example to cause the car to drive into traffic, while the other simply poured salt on the road to test how robust the lane-finding algorithm was. The latter attack was much easier to pull off, but the former was more sensational and received more media attention.
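To make that contrast concrete, here is a minimal sketch in PyTorch (not from the talk; the toy classifier, label, and perturbation sizes are illustrative assumptions). The gradient-based adversarial example needs white-box access to the model and a way to inject the crafted input, while the noise-injection analogue of pouring salt on the road needs nothing but the ability to degrade the sensor input.

    # Illustrative sketch: algorithm-level attack vs. input-level attack.
    # The model is a stand-in; any differentiable classifier behaves the same.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    loss_fn = nn.CrossEntropyLoss()

    x = torch.rand(1, 3, 32, 32)   # stand-in camera frame
    y = torch.tensor([3])          # assumed true label

    # "Sexy" attack: FGSM adversarial example. Requires gradients of the
    # model (white-box access) plus a way to feed the perturbed input in.
    x.requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = (x + 0.03 * x.grad.sign()).clamp(0, 1).detach()

    # "Practical" attack: degrade the input itself, no model knowledge
    # needed -- the code equivalent of pouring salt on the road.
    x_noisy = (x.detach() + 0.3 * torch.rand_like(x)).clamp(0, 1)

    print(model(x_adv).argmax().item(), model(x_noisy).argmax().item())

Both inputs can flip the prediction, but the second one attacks the data processing pipeline rather than the algorithm, which is the talk's point about where practical red-teaming effort should go.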

Abstract

AI needs no introduction as one of the most overhyped technical fields in the last decade. The subsequent hysteria around building AI-based systems has also made them a tasty target for folks looking to cause major mischief. However, most of the popular proposed attacks specifically targeting AI systems focus on the algorithm rather than the system in which the algorithm is deployed. We’ll begin by talking about why this threat model doesn’t hold up in realistic scenarios, using facial detection and self-driving cars as primary examples. We will also learn how to more effectively red-team AI systems by considering the data processing pipeline as the primary target.
