Conference:  Defcon 31
Authors: Austin Carson Founder & President of SeedAI, Dr. Arati Prabhakar Director of the White House Office of Science and Technology Policy (OSTP) and Assistant to the President for Science and Technology
2023-08-01

On May 4th, the White House announced the AI Village at DEF CON's Generative AI Red Team and their participation, followed by announcements from the House and Senate AI Caucus leadership and the National Science Foundation. In this panel, we'll hear from top officials and executives about how they're balancing the explosion of creativity and entrepreneurship from the advent of GenAI with the known & unknown risks of deployment at scale. We'll also hear how this exercise is viewed as a model for enhancing trust & safety through democratizing AI education. Panelists will also discuss why it's meaningful to bring together thousands of people from different communities to conduct the exercise across the available AI models.
Conference:  Defcon 31
Authors: Dennis Giese
2023-08-01

Exactly five years ago, we presented ways to hack and root vacuum robots. Since then, many things have changed. Back then we were looking into ways to use the robots' "dumb" sensors to spy on the user (e.g. by using the ultrasonic sensor). But all our predictions were exceeded by reality: today's robots come with multiple cameras and microphones. AI is used to detect objects and rooms. But can it be trusted? Where will pictures of your cat end up? In this talk we will look at the security and privacy of current devices. We will show that their flaws pose a huge privacy risk and that device certifications cannot be trusted. Not to worry, though - we will also show you how to protect yourself (and your data) from your robot friends. You will learn how to get root access to current flagship models from 4 different vendors. Come with us on a journey of having fun hacking interesting devices while preventing them from breaching your privacy. We will also discuss the risks of used devices, for both old and new users. Finally, we will talk about the challenges of documenting vacuum robots and developing custom software for them. While our primary goal is to disconnect the robots from the cloud, we also want to enable users to repair their devices - pwning to own in a wholesome way.
Conference:  Defcon 31
Authors: Moderator: Perri Adams DARPA AIxCC Program Manager, Michael Sellitto Head of Geopolitics and Security Policy, Anthropic, Heather Adkins Vice President of Security Engineering, Google, Vijay Bolina Chief Information Security Officer & Head of Cybersecurity Research, Google DeepMind, Dave Weston Vice President of Enterprise and OS Security, Microsoft, Matthew Knight Head of Security, OpenAI, Omkhar Arasaratnam General Manager, Open Source Security Foundation (OpenSSF)
2023-08-01

DARPA’s AI Cyber Challenge program manager, Perri Adams, is joined by collaborators from Anthropic, Google, Google DeepMind, OpenAI and the Open Source Security Foundation to share insights about the upcoming competition and discuss the software security challenges facing the commercial sector and open-source community.
Conference:  Defcon 31
Authors: Dr. Craig Martell Chief Digital and AI Officer at the Department of Defense
2023-08-01

In 1979, NORAD (North American Aerospace Defense Command) was duped by a simulation that made it appear a full-scale Soviet nuclear attack was underway. The incident lent credibility to the plot of the 1983 classic WarGames, in which a computer nearly makes unstoppable, life-altering decisions. On the 40th anniversary of the movie that predicted the potential role of AI in military systems, LLMs have become a sensation and, increasingly, synonymous with AI. This is a dangerous detour in AI’s development, one that humankind can’t afford to take. Join Dr. Martell for an off-the-cuff discussion on what’s at stake as the Department of Defense presses forward to balance agility with accountability and the role hackers play in ensuring the responsible and secure use of AI from the boardroom to the battlefield.
Authors: Dr. Magda Chelly
2023-02-16

tldr - powered by Generative AI

The presentation discusses the potential risks and benefits of using AI-generated code in software development, with a focus on cybersecurity and DevOps. The speaker emphasizes the importance of balancing speed and efficiency with quality and security, and highlights the need for clear contracts and due diligence when working with third-party AI tools and data sets.
  • AI-generated code can increase productivity and reduce errors, but may also pose significant risks to businesses and users if not properly regulated and tested.
  • Clear contracts and due diligence are necessary when working with third-party AI tools and data sets to ensure quality and security.
  • The use of AI in software development requires balancing speed and efficiency against quality and security.
  • The speaker suggests that AI-assisted coding may be a more effective approach than relying solely on AI-generated code.
  • The presentation also touches on the broader issues of data privacy and intellectual property rights in the context of AI and big data.
Authors: Rob Van der Veer
2023-02-15

tldr - powered by Generative AI

The presentation discusses the importance of treating AI systems as professional software and applying traditional software development life cycle approaches to ensure security and privacy. It provides 10 commandments for AI security and privacy, covering AI life cycle, model attacks, and protection.
  • AI systems should be treated as professional software and maintained using traditional software development life cycle approaches
  • 10 commandments for AI security and privacy include involving AI applications and data scientists in existing software security programs, documenting experimentation, implementing unit testing, and protecting source data and development
  • Model attacks can be carried out through data poisoning, adversarial examples, and model inversion, and can be prevented through techniques such as data sanitization and model robustness
  • Protection measures for AI systems include secure storage and access control for source data, encryption, and versioning
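The adversarial-example attack listed above can be illustrated with a minimal sketch. The toy linear classifier and the fast-gradient-sign-style perturbation below are illustrative assumptions for demonstration, not anything presented in the talk:

```python
import numpy as np

# Toy linear classifier: score = w . x; positive score => class 1.
w = np.array([0.5, -1.2, 0.8])

def predict(x):
    return int(np.dot(w, x) > 0)

def adversarial(x, epsilon):
    # For a linear model, the gradient of the score with respect to the
    # input is just w, so nudging each feature by epsilon against the
    # sign of w (or with it) is enough to flip the prediction.
    grad_sign = np.sign(w)
    if predict(x) == 1:
        return x - epsilon * grad_sign  # push score down
    return x + epsilon * grad_sign      # push score up

x = np.array([1.0, 0.2, 0.5])
x_adv = adversarial(x, epsilon=1.0)
print(predict(x))                   # original prediction: 1
print(predict(x_adv))               # flipped prediction: 0
print(np.max(np.abs(x_adv - x)))    # perturbation bounded by epsilon
```

Real attacks apply the same idea to deep networks via backpropagated gradients; defenses such as adversarial training and input sanitization aim to make the score insensitive to such small, bounded perturbations.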
Authors: Aaron Ansari
2021-09-24

tldr - powered by Generative AI

The importance of secure design, logging and monitoring, and robust authentication in AI technology
  • API breaches can lead to manipulation of AI engines, which can be dangerous in financial and loan decisions
  • Insecure design and injection can compromise AI engines
  • Broken authentication can lead to unauthorized access and manipulation of AI engines
  • Logging and monitoring are crucial for compliance and abuse prevention
  • Human involvement and fine-tuning are necessary for effective decision-making in AI technology
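The authentication and logging points above can be sketched as a minimal gatekeeper around a model call. The token store, the `score_loan` stand-in for the AI engine, and the log format are illustrative assumptions:

```python
import hashlib
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("ai-api")

# Hypothetical token store: only hashes are kept, never raw tokens.
VALID_TOKEN_HASHES = {hashlib.sha256(b"demo-secret-token").hexdigest()}

def score_loan(features):
    # Stand-in for the real AI engine; returns a fixed dummy score.
    return 0.5

def handle_request(token, features):
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    if token_hash not in VALID_TOKEN_HASHES:
        # Broken authentication is what this check prevents: reject
        # and log the attempt before the model is ever reached.
        log.info("DENIED request (token hash %s...)", token_hash[:8])
        raise PermissionError("invalid token")
    score = score_loan(features)
    # Log enough for auditing and abuse detection, but never the raw
    # token or the applicant's input features.
    log.info("SCORED request: %d features, score=%.2f", len(features), score)
    return score

print(handle_request("demo-secret-token", [1, 2, 3]))  # 0.5
```

The design choice the bullets point at: authentication happens before the engine is invoked, and the audit log records outcomes without leaking secrets or personal data, so monitoring can catch manipulation attempts without becoming a privacy liability itself.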