Conference:  Defcon 31
Authors: Sven Cattell Founder nbhd.ai & AI Village, Rumman Chowdhury Founder Humane Intelligence, Austin Carson Founder SeedAI

We’re running the largest live AI hacking event ever in the AI Village this year. Anthropic, Google, Hugging Face, Meta, NVIDIA, OpenAI, and Stability AI have all provided models to attack, and Scale AI has built the platform. This event is orders of magnitude bigger than any previous AI red team effort, and observers from the White House, NIST, NSF, and the EU are coming to learn from hackers. We built this event to grow the community that knows how to effectively evaluate large language models, as evaluation involves much more than prompt injections and jailbreaks. AI works fundamentally differently from traditional software and forms only one part of a product. Trust and security of AI in a system thus have to work fundamentally differently from trust and security in traditional software. This is especially true for generative AI systems. The core difference is that AI is a stochastic component of software and is allowed to make a small number of mistakes. This changes bug hunting, reporting, and payouts. Come to this talk to hear how and why we organized this event, and the history of algorithmic and bias bounties that led up to the largest one ever at DEF CON 31. We’ll also give you some tips to help you in the contest.
Conference:  Transform X 2022
Authors: Susan Zhang, Faisal Siddiqi, Bryan Catanzaro, Erhan Bas, Elliot Branson

Join this enterprise-focused, spirited discussion on how best to train, use, and fine-tune foundation models in the enterprise. Elliot Branson, Director of Machine Learning & Engineering at Scale AI, will moderate the panel with industry experts from AWS, NVIDIA, Netflix, and Meta. Erhan Bas, formerly an Applied Scientist at Amazon Web Services and now at Scale, shares his perspective on training large language models (LLMs). Bryan Catanzaro, Vice President of Applied Deep Learning Research at NVIDIA, shares how the GPU manufacturer is targeting foundation models as a core workload for enterprise customers. Faisal Siddiqi, Director of Machine Learning Platform at Netflix, will share how his company is using foundation models to analyze highly produced video content. Susan Zhang, Researcher at Facebook AI Research (FAIR), a division of Meta, will share insights from training and fine-tuning Meta’s OPT model. Members of the panel will explain how they scale training across multiple nodes, avoid overfitting by mitigating data quality issues early on, and address bias in models trained on a large internet-based text corpus. The panelists will also discuss the compute cost inherent in training an LLM from scratch, how to avoid costly and tedious hyperparameter optimization, the need to mitigate training failure risk in clusters with thousands of GPUs (including sticking to synchronous gradient descent), and the need for extremely fast storage devices to save and load training checkpoints.