The talk focuses on the practicality of red-teaming AI systems and emphasizes the importance of understanding the threat model of the system being attacked.
- Most proposed attacks on AI systems focus on the algorithm rather than the system in which the algorithm is deployed.
- Red-teaming AI systems should focus on the data processing pipeline as the primary target.
- Understanding the threat model of the system being attacked is crucial.
- An anecdote is given to illustrate the difference between a sexy attack and a practical attack.
- The talk is a knowledge dump of practical wisdom gained from experience.
The speaker tells a story about two different attacks on Tesla's self-driving cars. One attack involved injecting an adversarial example to cause the car to drive into traffic, while the other involved pouring salt on the road to see how robust the lane-finding algorithm was. The latter attack was much easier to pull off, but the former was more sensational and received more media attention.
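To make the first kind of attack concrete, here is a minimal sketch of what a generic adversarial-example attack looks like in code, using the well-known fast gradient sign method (FGSM) against an image classifier. This is an illustration only, not the specific attack on Tesla described in the talk; the `model`, `image`, and `label` objects are assumed to be a PyTorch classifier and a correctly labeled input.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` (FGSM-style sketch)."""
    image = image.clone().detach().requires_grad_(True)
    # Compute the loss of the model's prediction against the true label.
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that increases the loss,
    # then clip back to the valid [0, 1] pixel range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

The point of the contrast in the talk is that an attack like this requires detailed access to (or knowledge of) the model, whereas pouring salt on the road requires nothing but salt, which is exactly the gap between a "sexy" attack and a practical one.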