Adversarial Machine Learning - Learn Why and How to Break AI!

Conference: RSA Conference 2022

2022-06-06

Abstract

Adversaries continuously transform their tactics and tools to deceive or break applications based on AI. As AI is increasingly deployed in security-critical applications, attackers focus on evading and extracting the underlying ML models to achieve their malicious goals. This hands-on Learning Lab is a unique opportunity to learn why we should break our own AI, and more importantly, how to break it!

This session will follow the Chatham House Rule to allow for a free exchange of information and learning. We look forward to participants actively engaging in the discussion and remind attendees that no comment attribution or recording of any sort should take place. This is a capacity-controlled session. If added to your schedule and your availability changes, please remove this session from your schedule to allow others to participate.

The facilitator requires that attendees bring their own laptops to the session. If you would like to run the code examples of the lab, you will need to install Python >= 3.7 and the Python packages torch >= 1.10, adversarial-robustness-toolbox >= 1.10, matplotlib, and notebook (to run Jupyter notebooks). The ability to clone a GitHub repo is also recommended. The packages can be installed with pip, the package installer that comes with Python:

• pip install torch
• pip install adversarial-robustness-toolbox
• pip install matplotlib
• pip install notebook
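As a taste of the kind of exercise the lab involves, the sketch below crafts untargeted adversarial examples with the Fast Gradient Sign Method (FGSM) using the Adversarial Robustness Toolbox (ART) against a small PyTorch classifier. The model architecture, random stand-in data, and eps value are illustrative assumptions, not the lab's actual materials.

import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Toy classifier (an illustrative placeholder): flattens a 1x28x28 input
# and maps it to 10 classes.
model = nn.Sequential(
    nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10)
)

# Wrap the PyTorch model so ART can compute loss gradients through it.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Random stand-in inputs; in the lab you would load a real dataset instead.
x = np.random.rand(16, 1, 28, 28).astype(np.float32)

# Untargeted FGSM: perturb each input by eps in the direction of the
# sign of the loss gradient.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)

# Compare predictions on clean vs. adversarial inputs.
clean_preds = classifier.predict(x).argmax(axis=1)
adv_preds = classifier.predict(x_adv).argmax(axis=1)
print(f"Predictions changed on {np.sum(clean_preds != adv_preds)} of {len(x)} inputs")

With a trained model, even a small eps typically flips a noticeable fraction of predictions; the lab explores why that is and what it implies for ML deployed in security-critical settings.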
