
Attacking and protecting Artificial Intelligence

2023-02-15

Authors: Rob van der Veer


Summary

The presentation discusses the importance of treating AI systems as professional software and applying traditional software development life cycle approaches to ensure security and privacy. It presents 10 commandments for AI security and privacy, covering the AI life cycle, model attacks, and protection.
  • AI systems should be treated as professional software and maintained using traditional software development life cycle approaches
  • The 10 commandments for AI security and privacy include bringing AI applications and data scientists into existing software security programs, documenting experimentation, implementing unit testing, and protecting source data and development
  • Model attacks can be carried out through data poisoning, adversarial examples, and model inversion, and can be mitigated through techniques such as data sanitization and model robustness (see the adversarial-example sketch after this list)
  • Protection measures for AI systems include secure storage and access control for source data, encryption, and versioning (a data-integrity sketch follows at the end of this summary)
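
To make the adversarial-example attack mentioned above concrete, here is a minimal sketch in NumPy against a toy logistic-regression model. The weights, input data, and epsilon value are illustrative assumptions, not material from the presentation; real attacks target trained neural networks, but the principle is the same.

```python
# Minimal FGSM-style adversarial-example sketch on a toy linear classifier:
# perturb the input in the direction that increases the model's loss.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained weights of a binary logistic-regression model.
w = rng.normal(size=10)
b = 0.1

def predict_proba(x):
    """Probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(x, y_true, epsilon=0.25):
    """Fast-gradient-sign-style perturbation for logistic regression.

    The gradient of the cross-entropy loss w.r.t. the input is
    (p - y_true) * w, so stepping along its sign raises the loss.
    """
    grad = (predict_proba(x) - y_true) * w
    return x + epsilon * np.sign(grad)

x = rng.normal(size=10)   # a legitimate input
y = 1.0                   # its true label
x_adv = fgsm_perturb(x, y)

print("clean prediction:       %.3f" % predict_proba(x))
print("adversarial prediction: %.3f" % predict_proba(x_adv))
```

Defenses such as adversarial training and input sanitization work against exactly this kind of small, targeted perturbation.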
The speaker mentions a case at Uber involving responsible AI, where algorithms discriminated against drivers and clients. This highlights the importance of considering an algorithm's purpose and avoiding discrimination when using algorithms in AI systems.
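
Relating to the protection measures listed above, the following is a minimal sketch of one of them: integrity checking of stored source data through hashed version manifests, so tampering (for example, poisoning of stored training data) can be detected before training. The file layout and manifest format are assumptions for illustration, not from the presentation.

```python
# Record a SHA-256 digest per dataset file and verify it later.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large datasets fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(data_dir: str, manifest_path: str = "data_manifest.json") -> dict:
    """Record a digest for every file under data_dir (hypothetical layout)."""
    manifest = {
        str(p): sha256_of(p)
        for p in sorted(Path(data_dir).rglob("*"))
        if p.is_file()
    }
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
    return manifest

def verify_manifest(manifest_path: str = "data_manifest.json") -> list:
    """Return the files whose current digest no longer matches the manifest."""
    manifest = json.loads(Path(manifest_path).read_text())
    return [p for p, h in manifest.items()
            if not Path(p).is_file() or sha256_of(Path(p)) != h]
```

Storing the manifest under version control alongside the training code gives a simple audit trail for which data version produced which model.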

Abstract

Is AI our doom or our savior? How can AI systems attack? How can they be attacked? How do we build security and privacy into them? In this session we will go through what makes AI systems so special by discussing several actual AI disasters and by reviewing the key principles behind the European AI Act and the new US AI Bill of Rights. The material presented is based on 30 years of experience with AI software engineering and extensive research that served as input for the new ISO/IEC 5338 standard on the AI life cycle and the upcoming OWASP AI security project.


