The presentation discusses how different machine learning attacks can be combined into more powerful ones, and why the mathematical background of these attacks must be understood in order to develop stronger defenses.
- Different machine learning attacks can be combined to become even more powerful
- The mathematical background of these attacks needs to be understood to develop stronger defenses
- The presenter's goal is to help people better understand the security of AI
- The presenter is interested in extracting models from AI chips and accelerators in order to make them more secure
- The presenter encourages people to look into leveraging adversarial examples for protection and for watermarking/fingerprinting
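To make the fingerprinting idea from the last bullet concrete, here is a minimal sketch. All models and names are hypothetical toy stand-ins, and random probe inputs are used as a simplification; in practice one would use adversarial examples (inputs near the decision boundary), which make the fingerprint far more discriminative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy linear models: the owner's model, a stolen near-copy,
# and an independently trained model. All weights here are illustrative.
W_owner = rng.normal(size=(2, 3))
W_copy = W_owner + 0.01 * rng.normal(size=(2, 3))  # slightly perturbed copy
W_other = rng.normal(size=(2, 3))                  # unrelated model

def predict(W, X):
    """Class predictions of a linear model (argmax over class scores)."""
    return np.argmax(X @ W, axis=1)

# Fingerprint: a fixed set of probe inputs plus the owner's predictions on
# them. (Random probes here; adversarial examples would sit near decision
# boundaries and separate copies from independent models more sharply.)
probes = rng.normal(size=(200, 2))
fingerprint = predict(W_owner, probes)

def match_rate(W):
    """Fraction of probe points on which a suspect model agrees."""
    return float(np.mean(predict(W, probes) == fingerprint))

print(f"copy:  {match_rate(W_copy):.0%}")   # near-copy matches almost always
print(f"other: {match_rate(W_other):.0%}")  # unrelated model matches far less
```

A high match rate on the probe set is then evidence that a suspect model was derived from the owner's model rather than trained independently.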
The presenter notes that machine learning attacks are currently applied as black boxes, and argues that their mathematical background must be understood in order to develop stronger defenses, rather than repeating the old pattern of building insecure systems and patching them only after they are hacked. The presenter also mentions extracting models from AI chips and accelerators as a path toward making them more secure.
Deep Neural Networks (DNNs) have been widely deployed for a variety of tasks across many disciplines, for example image processing, natural language processing, and voice recognition. However, creating a successful DNN model depends on the availability of huge amounts of data as well as enormous computing power, and model training is often an arduously slow process. This presents a large barrier to those interested in utilizing a DNN. To meet the demands of users who may not have sufficient resources, cloud-based deep learning services arose as a cost-effective and flexible solution allowing users to complete their machine learning (ML) tasks efficiently. Machine Learning as a Service (MLaaS) platform providers may spend great effort collecting data and training models, and thus want to keep them proprietary. The DNN models of MLaaS platforms can only be accessed through web-based API interfaces and are thus isolated from users. In this work, we develop a novel type of attack that allows the adversary to easily extract large-scale DNN models from various cloud-based MLaaS platforms hosted by Microsoft, Face++, IBM, Google, and Clarifai.
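The abstract does not detail the attack itself, but the general setting it describes — an adversary who can only query a prediction API — can be sketched with the classic query-based extraction recipe: sample inputs, record the API's labels, and train a surrogate on the stolen pairs. The "victim API" below is a hypothetical toy stand-in for a cloud endpoint, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a cloud MLaaS model: the adversary can only
# query it for labels, never inspect its weights.
W_secret = rng.normal(size=(2, 3))  # hidden 2-feature, 3-class linear model

def victim_api(x):
    """Black-box prediction endpoint: returns only the predicted class."""
    return int(np.argmax(x @ W_secret))

# Step 1: the adversary samples query inputs and records the API's answers.
X = rng.normal(size=(2000, 2))
y = np.array([victim_api(x) for x in X])

# Step 2: train a surrogate model on the stolen input/label pairs
# (multinomial logistic regression fit by plain gradient descent).
W = np.zeros((2, 3))
for _ in range(300):
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)             # softmax probabilities
    p[np.arange(len(y)), y] -= 1.0                # gradient of cross-entropy
    W -= 0.1 * (X.T @ p) / len(y)

# Step 3: measure agreement between surrogate and victim on fresh inputs.
X_test = rng.normal(size=(500, 2))
agree = float(np.mean([victim_api(x) == int(np.argmax(x @ W))
                       for x in X_test]))
print(f"surrogate agrees with victim on {agree:.0%} of fresh queries")
```

Even this crude sketch recovers a surrogate that closely mimics the victim's decisions, which is why query access alone is enough to undermine the proprietary value of a hosted model.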