This website contains additional information about our paper “Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks” from SANDLab at the University of Chicago. In the paper, we identify forensics as a complement to defenses for addressing ML security problems. Forensics addresses successful attack incidents by tracing them back to the attacker's identity. We apply forensics to trace back data poisoning attacks, designing an effective traceback system that accurately identifies the poisoned training data responsible for a given misclassification event.
Shawn Shan | Arjun Nitin Bhagoji | Heather Zheng | Ben Y. Zhao
For any inquiry about this project, please check out the FAQ and the GitHub issues. If they do not answer your question, please email Shawn at shawnshan@cs.uchicago.edu.
In adversarial machine learning, new defenses against attacks on deep learning systems are routinely broken soon after their release by more powerful attacks. In this context, forensic tools can offer a valuable complement to existing defenses by tracing a successful attack back to its root cause, and offering a path forward for mitigation to prevent similar attacks in the future.
In this paper, we describe our efforts in developing a forensic traceback tool for poison attacks on deep neural networks. We propose a novel iterative clustering and pruning solution that trims “innocent” training samples, until all that remains is the set of poisoned data responsible for the attack. Our method clusters training samples based on their impact on model parameters, then uses an efficient data unlearning method to prune innocent clusters. We empirically demonstrate the efficacy of our system on three types of dirty-label (backdoor) poison attacks and three types of clean-label poison attacks, across domains of computer vision and malware classification. Our system achieves over 98.4% precision and 96.8% recall across all attacks. We also show that our system is robust against four anti-forensics measures specifically designed to attack it.
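The iterative clustering-and-pruning idea above can be illustrated with a toy sketch. This is not the paper's implementation: the helper names (`cluster_samples`, `attack_still_succeeds`, `traceback`), the scalar "impact" surrogate for a sample's effect on model parameters, and the stand-in attack check are all hypothetical simplifications. In the real system, pruning a cluster means unlearning it from the model and replaying the attack input; here we simulate that with a threshold on remaining poison samples.

```python
# Toy sketch of iterative clustering and pruning (NOT the authors' code).
# Each sample carries a hypothetical scalar "impact" standing in for its
# effect on model parameters, plus a ground-truth "is_poison" flag used
# only by the simulated attack check.

POISON_THRESHOLD = 2  # assumed: the attack needs at least this much poison


def cluster_samples(samples, k=2):
    """Split samples into k clusters by their impact surrogate.

    The real system clusters samples by their effect on model
    parameters; sorting on a single scalar is a stand-in for that.
    """
    ordered = sorted(samples, key=lambda s: s["impact"])
    size = max(1, len(ordered) // k)
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]


def attack_still_succeeds(training_set):
    """Stand-in for 'unlearn the pruned cluster, replay the attack input'.

    In this toy, the attack works iff enough poison samples remain.
    """
    return sum(s["is_poison"] for s in training_set) >= POISON_THRESHOLD


def traceback(training_set):
    """Iteratively prune clusters whose removal leaves the attack working.

    A cluster that can be removed without breaking the attack is treated
    as 'innocent'; whatever survives is the suspected poison set.
    """
    suspect = list(training_set)
    pruned = True
    while pruned and len(suspect) > 1:
        pruned = False
        for cluster in cluster_samples(suspect):
            remaining = [s for s in suspect if s not in cluster]
            if remaining and attack_still_succeeds(remaining):
                suspect = remaining  # cluster was innocent: prune it
                pruned = True
                break
    return suspect
```

For example, with eight benign samples of low impact and two high-impact poison samples, the loop repeatedly discards benign clusters (the attack still works without them) and halts once removing anything more would break the attack, leaving only the poison set.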

The general scenario for our traceback system: (a) the attacker poisons the training data to inject a vulnerability into the model; (b) at run time, the attacker submits an attack input to cause a misclassification event; (c) our traceback system inspects the misclassification event to identify its root cause.
If you want to find out more about this project, you can read our publicly available paper and presentation slides. For readers who want to extend our work, we also provide source code on Github.
To cite our paper, you can use the following BibTeX entry:
@inproceedings{shan2022poison,
title={Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks},
author={Shan, Shawn and Bhagoji, Arjun Nitin and Zheng, Haitao and Zhao, Ben Y},
booktitle={Proc. of USENIX Security},
year={2022}
}
Below, we answer some questions you might have: