Introduction

This website contains additional information about our paper “Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks” from SANDLAB at the University of Chicago. In the paper, we identify forensics as a complement to defenses for addressing ML security problems. Forensics addresses successful attack incidents by tracing them back to the attacker's identity. We apply forensics to trace back data poisoning attacks, designing an effective traceback system that accurately identifies the poisoned training data responsible for a given misclassification event.

Team



Shawn Shan
Arjun Nitin Bhagoji
Heather Zheng
Ben Y. Zhao

For any inquiries about this project, please check out the FAQ and GitHub issues. If they do not answer your question, please email Shawn at shawnshan@cs.uchicago.edu.

Details

In adversarial machine learning, new defenses against attacks on deep learning systems are routinely broken soon after their release by more powerful attacks. In this context, forensic tools can offer a valuable complement to existing defenses by tracing a successful attack back to its root cause and offering a path forward for mitigation to prevent similar attacks in the future.

In this paper, we describe our efforts in developing a forensic traceback tool for poison attacks on deep neural networks. We propose a novel iterative clustering and pruning solution that trims “innocent” training samples, until all that remains is the set of poisoned data responsible for the attack. Our method clusters training samples based on their impact on model parameters, then uses an efficient data unlearning method to prune innocent clusters. We empirically demonstrate the efficacy of our system on three types of dirty-label (backdoor) poison attacks and three types of clean-label poison attacks, across domains of computer vision and malware classification. Our system achieves over 98.4% precision and 96.8% recall across all attacks. We also show that our system is robust against four anti-forensics measures specifically designed to attack it.

The general scenario for our traceback system: (a) the attacker poisons the training data to inject a vulnerability into the model; (b) at run time, the attacker submits an attack input to cause a misclassification event; (c) our traceback system inspects the misclassification event to identify its root cause.
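The iterative clustering-and-pruning loop described above can be sketched as follows. This is a minimal toy sketch, not the paper's implementation: the 2-D points stand in for each sample's "impact on model parameters" representation, the `attack_score` callable stands in for the unlearning-based check of whether the attack still succeeds, and the names `two_means` and `traceback_prune` are hypothetical.

```python
import numpy as np

def two_means(X, iters=10):
    """Minimal 2-means: seed one center at the first sample and the other
    at the sample farthest from it, then alternate assign/update steps."""
    c0 = X[0].astype(float)
    c1 = X[np.argmax(np.linalg.norm(X - c0, axis=1))].astype(float)
    centers = np.stack([c0, c1])
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels

def traceback_prune(embeddings, attack_score, max_rounds=10):
    """Iteratively cluster the surviving samples and prune the cluster that
    is not responsible for the misclassification event.

    embeddings:   (n, d) array standing in for per-sample impact on model
                  parameters (hypothetical representation)
    attack_score: callable mapping an index set -> float, a stand-in for
                  the unlearning check (does the attack still succeed when
                  only these samples remain?)
    """
    remaining = np.arange(len(embeddings))
    for _ in range(max_rounds):
        if len(remaining) < 4:
            break
        labels = two_means(embeddings[remaining])
        c0, c1 = remaining[labels == 0], remaining[labels == 1]
        s0, s1 = attack_score(c0), attack_score(c1)
        if abs(s0 - s1) < 1e-6:          # no cluster is clearly innocent: stop
            break
        remaining = c0 if s0 > s1 else c1  # prune the innocent cluster
    return remaining

# Toy demo: 90 clean samples near the origin, 10 "poison" samples far away.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (90, 2)), rng.normal(8.0, 0.5, (10, 2))])
poison = set(range(90, 100))
score = lambda idx: len(poison & set(int(i) for i in idx)) / max(len(idx), 1)
found = traceback_prune(X, score)
```

On this toy data the loop first discards the large innocent cluster, then stops once the surviving clusters are equally responsible for the attack, leaving only the poison indices.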

Paper

If you want to find out more about this project, you can read our publicly available paper and presentation slides. For readers who want to extend our work, we also provide source code on Github.




Cite the Paper

To cite our paper, you can use the following BibTeX entry:

@inproceedings{shan2022poison,
  title={Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks},
  author={Shan, Shawn and Bhagoji, Arjun Nitin and Zheng, Haitao and Zhao, Ben Y},
  booktitle={Proc. of USENIX Security},
  year={2022}
}

FAQ

Below, we answer some questions you might have:

Why do we need forensics when we already have defenses?

Forensics is a complementary approach to defenses (or security through prevention). While a significant amount of prior work focuses on defenses against adversarial attacks, history (in both machine learning security and many other areas of security) shows that no defense is perfect in practice, and attackers will find ways to circumvent even strong defenses. Forensics addresses incident response for successful attacks by tracing them back to their root causes. Modern security systems leverage both defenses and forensics to achieve maximum security. The same dynamic holds true in the context of poison attacks on neural networks. For example, a defense against backdoors that identifies poison training data can be circumvented by an attacker who breaches the server after the defense has been applied, but prior to model training.

How do we identify misclassification events in the first place?

Misclassification events can often be identified through downstream signals of the attack. Once attackers have breached a system through data poisoning, they are likely to take actions that cause damage. The damage (data leaked, a server breached, a self-driving car crashed) will alert the system administrator to the potential breach and prompt a traceback to the underlying poison attack.