
AIModel-Mutator: Finding Vulnerabilities in TensorFlow

Conference: Black Hat USA 2021

2021-11-10

Summary

The presentation discusses the use of AIModel-Mutator to find vulnerabilities in TensorFlow and other machine learning frameworks.
  • Machine learning frameworks, like TensorFlow, can contain vulnerabilities that pose security risks.
  • API fuzzing alone is not enough to find deep vulnerabilities hidden in complicated code logic; such bugs only trigger under a specific semantic context.
  • AIModel-Mutator is a new fuzzing approach that can help find deep vulnerabilities without manual context construction.
  • The tool was evaluated on the TensorFlow framework and found at least 4 deep vulnerabilities that API fuzzing could not find.
  • One attack was constructed based on the findings and successfully crashed the TensorFlow framework.
  • Structure-aware model mutation loads a model from its SavedModel files and mutates the computation graph directly (see the sketch at the end of this summary).
  • An anecdote in the talk illustrates why this matters: even tiny bugs inside an AI framework can be dangerous.
If a third-party model has not been properly vetted, it can exploit vulnerabilities inside the AI framework to launch an attack, putting all upstream services in danger. This underscores the need for tools like AIModel-Mutator that find the deep vulnerabilities API fuzzing cannot detect.
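As a concrete illustration of structure-aware model mutation, here is a minimal sketch assuming a TensorFlow SavedModel on disk; the function names and the single-attribute mutation strategy are hypothetical and not taken from AIModel-Mutator itself. The idea is to parse the model protobuf, corrupt one node attribute in the graph, and watch whether loading the mutated model crashes the framework.

```python
# Hypothetical sketch of structure-aware model mutation; names and the
# mutation strategy are illustrative, not the AIModel-Mutator implementation.
import random

import tensorflow as tf
from tensorflow.core.protobuf import saved_model_pb2


def load_graph_def(saved_model_dir):
    """Parse the GraphDef out of a SavedModel's saved_model.pb."""
    sm = saved_model_pb2.SavedModel()
    with open(saved_model_dir + "/saved_model.pb", "rb") as f:
        sm.ParseFromString(f.read())
    return sm.meta_graphs[0].graph_def


def mutate_graph(graph_def):
    """Corrupt one integer attribute of a random node with a boundary value."""
    node = random.choice(graph_def.node)
    for attr in node.attr.values():
        if attr.HasField("i"):
            attr.i = random.choice([-1, 0, 2**31 - 1])
            break
    return graph_def


def fuzz_once(saved_model_dir):
    """Load a mutated graph; a clean error is fine, a process crash is a finding."""
    graph_def = mutate_graph(load_graph_def(saved_model_dir))
    try:
        with tf.Graph().as_default():
            tf.compat.v1.import_graph_def(graph_def, name="")
    except Exception as e:  # graceful rejection by input validation
        print("rejected:", type(e).__name__)
```

Because each mutation starts from a valid model, the semantic context (a well-formed graph with consistent surrounding nodes) comes for free, which is exactly what API fuzzing struggles to construct from scratch.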

Abstract

Like other software, machine learning frameworks can contain vulnerabilities. Advanced machine learning frameworks such as TensorFlow, PyTorch, and PaddlePaddle are under active development to keep up with increasing demand, and this rapid development style brings security risks along with its benefits. For example, from 2019 to 2021, the number of CVEs for TensorFlow increased 15-fold. API fuzzing is a common approach to vulnerability detection, but it is not enough for machine learning frameworks. In this work, we found that API fuzzing cannot find "deep" vulnerabilities hidden in complicated code logic. These vulnerabilities have to be triggered under a certain semantic context, and it is hard for API fuzzing to construct such a context from scratch. In this session, we will demonstrate a new fuzzing approach for machine learning frameworks that can help find deep vulnerabilities without manual context construction. We evaluated our tool AIModel-Mutator on the TensorFlow framework and found at least 4 deep vulnerabilities that cannot be found by API fuzzing. Two of them were found at the model inference stage, where an attacker can easily craft an input to defeat a model and crash the system. We also constructed one attack based on our findings, and it successfully crashed the TensorFlow framework. Our findings have been confirmed by the Google TensorFlow security team.
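For contrast, plain API fuzzing looks roughly like the following sketch (hypothetical, not the tooling from the session): random inputs to a single operation mostly die in the framework's early input validation, so the deep code paths guarded by semantic preconditions are rarely reached.

```python
# Hypothetical sketch of plain API fuzzing on a single TensorFlow op.
import random

import tensorflow as tf


def fuzz_conv2d_once():
    """One trial: call tf.nn.conv2d with randomized tensor shapes."""
    img = tf.random.uniform([random.randint(1, 4) for _ in range(4)])   # NHWC input
    filt = tf.random.uniform([random.randint(1, 4) for _ in range(4)])  # HWIO filter
    try:
        tf.nn.conv2d(img, filt, strides=1, padding="SAME")
    except Exception:
        # Most random shape combinations are rejected by shallow validation
        # before any deep kernel logic runs, which is why API fuzzing misses
        # vulnerabilities that require a specific semantic context.
        pass
```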
