
Practical Defenses Against Adversarial Machine Learning

Conference:  BlackHat USA 2020

2020-08-05

Summary

The presentation discusses the challenges of demonstrating mathematically novel attacks on industry infrastructure and the importance of considering bad inputs and model leakage. It also highlights how rarely gradient-based attacks appear in practice and how effective quick-and-dirty attacks can be.
  • Demonstrating mathematically novel attacks against industry infrastructure is challenging: deployed models sit behind additional layers of infrastructure, and attacks in the literature often depend on artificial constraints.
  • Bad inputs and model leakage are the two classes of exploits that should be considered.
  • Quick-and-dirty attacks are clever and require no specific knowledge of the underlying algorithms or models.
  • Gradient-based attacks constitute a much smaller share of real-world activity than less mathematical, quick-and-dirty attacks (a minimal sketch of a gradient-based attack follows this list).
  • Anecdotes include self-driving cars interpreting physical artifacts as part of the environment and malware classifiers being fooled by appending carefully selected strings to a malicious file.
  • Deep fakes are more susceptible to gradient-based attacks, but larger models may be more robust to model leakage.
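
For context on what "gradient-based" means here, the sketch below shows a fast-gradient-sign (FGSM-style) evasion step against a toy logistic-regression classifier. The model, weights, input, and perturbation budget are all illustrative assumptions, not anything from the talk:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained" model: these weights are assumed for illustration only.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.1

def predict(x):
    """Probability that x belongs to the 'malicious' class."""
    return sigmoid(x @ w + b)

def input_gradient(x):
    """Gradient of the malicious score with respect to the input.
    For logistic regression: d sigmoid(w.x + b) / dx = p * (1 - p) * w."""
    p = predict(x)
    return p * (1.0 - p) * w

x = rng.normal(size=16)   # an input the attacker wants reclassified
epsilon = 0.5             # attacker's perturbation budget

# FGSM-style step: move against the gradient to push the score down.
x_adv = x - epsilon * np.sign(input_gradient(x))

print(f"score before: {predict(x):.3f}")
print(f"score after:  {predict(x_adv):.3f}")
```

The attack requires white-box access to the gradient, which is one reason the talk argues such attacks are less common against deployed systems than the quick-and-dirty variety.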
One example of bad inputs involves self-driving cars. Last year at DEF CON, the presenter gave a talk in which they poured salt on the ground, which looked unremarkable because it was cold, and drove a friend's Tesla over it. Lo and behold, the Tesla steered off in a direction it should not have, treating the salt as part of the road. Another example involves malware classifiers: appending carefully selected strings to a malicious file can fool the classifier into marking that file as benign (a sketch of this technique follows).
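
To make the string-appending trick concrete, here is a minimal sketch against a hypothetical static classifier. The scorer, the string list, and the file bytes are all illustrative assumptions; real classifiers use far richer features, but the mechanics of the evasion are the same:

```python
# Hypothetical static classifier: scores a file by the benign-looking
# printable strings it contains. This stand-in scorer is an assumption
# for illustration; it is not any real product's model.
BENIGN_STRINGS = [b"Microsoft Windows", b"Copyright (c)", b"KERNEL32.dll"]

def score(file_bytes: bytes) -> float:
    """Return a maliciousness score in [0, 1]; higher = more suspicious."""
    hits = sum(s in file_bytes for s in BENIGN_STRINGS)
    return max(0.0, 1.0 - 0.25 * hits)  # illustrative scoring rule only

malware = b"MZ" + b"\x90" * 64  # stand-in bytes for a malicious PE file

# Appending bytes past the end of a PE file (the "overlay") does not
# change how the file executes, but it does change what a static
# feature extractor sees.
evaded = malware + b"\x00".join(BENIGN_STRINGS)

print(score(malware))  # 1.0  -> flagged as malicious
print(score(evaded))   # 0.25 -> now scored as likely benign
```

The key point is that the appended bytes leave the file's runtime behavior unchanged while shifting its static features toward benign.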

Abstract

Adversarial machine learning has hit the spotlight as a topic relevant to practically-minded security teams, but noise and hype have diluted the discourse to gradient-based comparisons of blueberry muffins and chihuahuas. This fails to reflect the attack landscape, making it difficult to adequately assess the risks. More concerning still, recommendations for mitigations are similarly lacking in their calibration to real threats. This talk discusses research conducted over the past year on real-world attacks against machine learning systems, including recommendation engines, algorithmic trading platforms, and email filtering, in addition to the classic examples of facial recognition and malware classification. We'll begin by discussing the difference between academic and deployment attack environments before diving into real-world attack examples. Most importantly, the bulk of the session will detail practical defensive measures.

