
Zen and the Art of Adversarial Machine Learning

Conference: Black Hat USA 2021

2021-11-11

Summary

The presentation discusses various attacks on machine learning systems and emphasizes the importance of understanding algorithms and optimization methods to secure them.
  • There are two main stages at which an adversary can attack a machine learning system: before deployment, by poisoning the training data, and after deployment, through inference-time attacks with various objectives (see the poisoning sketch after this list)
  • Extraction is the most foundational attack primitive: assemble a dataset, label it by querying the target model, and train a substitute model to create a functional equivalent (see the extraction sketch after this list)
  • It is important to separate production attacks from data collection efforts and use separate infrastructure for them
  • Understanding algorithms and optimization methods is crucial for securing machine learning systems
  • Academic researchers are excited to share their research and can accelerate learning
  • Security measures that apply to conventional software systems can also transfer to machine learning models
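
To make the pre-deployment stage concrete, here is a minimal sketch of a label-flipping poisoning attack, assuming an adversary who can tamper with a fraction of the training labels. The dataset, models, and the poison_labels helper are illustrative stand-ins, not artifacts from the talk.

```python
# Minimal sketch: label-flipping poisoning, assuming the adversary
# can corrupt a fraction of the training labels before the model is fit.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(y, fraction=0.2, seed=0):
    """Flip a random fraction of the binary training labels (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y[idx] = 1 - y[idx]  # binary task: flip 0 <-> 1
    return y

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, poison_labels(y_train))
print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

Even this crude attack typically degrades test accuracy; real poisoning attacks tend to be stealthier and more targeted.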
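And a sketch of the extraction primitive described above, assuming only black-box query access. Here, query_target is a hypothetical stand-in for the victim's prediction API, and a locally trained model plays the victim so the example is self-contained.

```python
# Minimal sketch: model extraction via black-box queries. The attacker
# labels their own data with the victim's predictions and trains a copy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=20, random_state=1)
victim = RandomForestClassifier(random_state=1).fit(X, y)

def query_target(samples):
    """Hypothetical stand-in for a remote, query-only prediction endpoint."""
    return victim.predict(samples)

# 1. Assemble unlabeled data from a plausible input distribution.
X_query = np.random.default_rng(1).normal(size=(1000, 20))
# 2. "Steal" labels by querying the target model.
y_stolen = query_target(X_query)
# 3. Train a substitute that approximates the target's decision function.
substitute = DecisionTreeClassifier(random_state=1).fit(X_query, y_stolen)

# Fidelity: how often the substitute agrees with the victim on fresh inputs.
X_eval = np.random.default_rng(2).normal(size=(1000, 20))
print("agreement:", (substitute.predict(X_eval) == query_target(X_eval)).mean())
```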
The presenter notes that their job revolves around attacking machine learning models, and that many of these attacks are simple but rest on a strong foundation. They encourage attendees to start with the basics and not be deterred from trying these techniques, because someone has to secure the models if organizations are going to implement them.

Abstract

Machine learning has so far been relatively unchecked on its way to world domination. As the high pace of ML research continues, ML is being integrated into all manner of business processes: chatbots, sales lead generation, maintenance decisions, policing, medicine, recommendations... However, several security concerns have gone unaccounted for, which has led to some less-than-desirable outcomes. Researchers have extracted PII from language models, red teamers have stolen (and then bypassed) spam and malware classification models, citizens have been incorrectly identified as criminals, and otherwise qualified home buyers have been denied mortgages. This is just scratching the surface. While attacks on AI systems are talked about as futuristic, the consequences of not securing them are already being experienced.

This talk will discuss the current state of ML security, the symmetry found in adversarial ML, and how offensive security professionals can approach the topic. We will provide a compendium of attacks and cover the fundamentals of attacking ML, such as:
  • Where to find models to attack, and what to look for
  • Given all available options, what to do
  • What is needed for a successful attack
  • Whether the attack will take months or minutes, and whether it is worth it

Offensive teams might not have as many papers published or as many PhDs among their ranks, but they have data, domain knowledge, and the right mindset to challenge AI systems in real-world environments. This talk aims to be a defining resource for offensive security professionals looking to expand their skill sets.
