
Smashing the ML Stack for Fun and Lawsuits

Conference: Black Hat USA 2021

2021-08-04

Summary

Adversarial machine learning research poses novel legal risks for researchers, including potential breach of contract, liability under the Computer Fraud and Abuse Act, copyright infringement, violations of anti-circumvention law, and misappropriation of trade secrets.
  • Contracts often prohibit reverse engineering, which many adversarial machine learning attacks, notably model inversion and model stealing, effectively perform (see the model-stealing sketch after this list)
  • Adversarial machine learning attacks include evasion, poisoning, model inversion, and model stealing
  • Most defenses against adversarial machine learning attacks are currently broken
  • Legal risks for adversarial machine learning researchers include breach of contract, Computer Fraud and Abuse Act liability, copyright infringement, anti-circumvention violations, and misappropriation of trade secrets
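
A minimal sketch of the model-stealing item above, under strong simplifying assumptions: the victim model, its secret weights, and the victim_api endpoint are all invented for illustration, and the black box is assumed to return raw scores rather than labels. Real extraction attacks contend with far noisier, label-only APIs.

    import numpy as np

    rng = np.random.default_rng(0)

    # Victim: a secret linear scorer the attacker can only query.
    _secret_weights = rng.normal(size=8)

    def victim_api(x: np.ndarray) -> np.ndarray:
        """Black-box endpoint: returns scores, never the weights."""
        return x @ _secret_weights

    # Attack: send probe queries, then fit a proxy to the responses
    # by ordinary least squares. In this noiseless toy setting the
    # proxy recovers the secret weights essentially exactly.
    probes = rng.normal(size=(1000, 8))
    responses = victim_api(probes)
    proxy_weights, *_ = np.linalg.lstsq(probes, responses, rcond=None)

    # The proxy now reproduces the victim's behavior on new inputs.
    test = rng.normal(size=(3, 8))
    print("victim:", np.round(victim_api(test), 3))
    print("proxy: ", np.round(test @ proxy_weights, 3))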
The Chinese government appended random characters to the end of tweets to confuse Twitter's spam-flagging algorithm. Researchers from Skylight evaded Cylance's machine learning based malware detection by appending benign code to the WannaCry ransomware. Microsoft's chatbot Tay, designed to emulate a sweet 16-year-old's personality, was corrupted by trolls from Reddit and 4chan into posting bigoted content. OpenAI's GPT-2 model was replicated by researchers using open-source information and proxy models.
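
The first two incidents are evasion attacks: padding an input with benign content until a classifier's decision flips. The toy classifier below is hypothetical (it is not Twitter's or Cylance's real model) and only illustrates why appending benign bytes can defeat a model that scores byte statistics.

    import numpy as np

    def byte_histogram(data: bytes) -> np.ndarray:
        """Normalized 256-bin byte-frequency histogram of a file."""
        counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
        return counts / max(len(data), 1)

    # Toy linear model: high non-printable bytes look "malicious",
    # printable ASCII looks "benign"; score > 0 means flag as malware.
    weights = np.zeros(256)
    weights[128:] = 1.0
    weights[32:127] = -1.0

    def score(data: bytes) -> float:
        return float(weights @ byte_histogram(data))

    rng = np.random.default_rng(0)
    payload = bytes(rng.integers(0, 256, size=4096, dtype=np.uint8))  # stand-in "malware"
    padding = b"This is a perfectly ordinary string of text. " * 64

    evasive = payload
    # Appending benign bytes dilutes the histogram until the score
    # crosses the decision boundary; the payload itself is untouched.
    while score(evasive) > 0:
        evasive += padding

    print(f"payload alone: {score(payload):+.3f} (flagged)")
    print(f"with padding:  {score(evasive):+.3f} (evades)")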

Abstract

Adversarial machine learning research is booming. ML researchers are increasingly targeting commercial ML systems such as those used by Facebook, Tesla, Microsoft, IBM, or Google to demonstrate vulnerabilities. But what legal risks are researchers running? Does the law map onto expectations that vendors might have about how their systems should be used?

In this talk, we analyze the legal risks of testing the security of commercially deployed ML systems. Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking. Previously, our team analyzed common adversarial attacks under United States law, summarizing the ways in which variability in legal regimes created uncertainty for researchers and for companies that might be interested in understanding the legal rules that apply to certain kinds of attacks.

Because the United States Supreme Court has, for the first time, taken up the scope of the authorization provisions of the Computer Fraud and Abuse Act in Van Buren v. United States, we will be able to provide more definitive answers as to the legal risks that adversarial machine learning researchers may take on when performing attacks such as model inversion, membership inference, and poisoning attacks. We will also look at whether other legal regimes, such as copyright or contract law, map more directly onto defenders' expectations of what should be allowed.

