Adversaries of AI applications continuously refine their techniques for evasion, poisoning, extraction, and inference attacks against the underlying machine learning models. This technical talk introduces the open source tools used to reproduce these attacks, as well as the tools needed to defend and evaluate applications before they are deployed and exposed to adversaries.
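As a hypothetical illustration of the evasion category (not taken from the talk itself), the sketch below implements the Fast Gradient Sign Method (FGSM) against a toy logistic-regression "victim" model, using only NumPy so no attack library is assumed. The model weights, inputs, and epsilon are made-up values chosen for the demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy victim model: fixed weights of a binary logistic-regression classifier.
# (Hypothetical values for illustration only.)
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

def fgsm(x, y_true, eps):
    """Evasion attack: nudge the input in the direction that maximizes
    the loss, i.e. x_adv = x + eps * sign(d loss / d x)."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w   # gradient of binary cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 1.0])          # clean input, classified as 1
x_adv = fgsm(x, y_true=1, eps=0.6)

print(predict(x))      # 1
print(predict(x_adv))  # 0 -- a small perturbation flips the prediction
```

Open source toolkits for this kind of attack typically wrap the same gradient step behind a common estimator interface, which is what makes it practical to evaluate a model against many attacks before deployment.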