Book contents
- Frontmatter
- Contents
- List of Symbols
- Acknowledgments
- Part I Overview of Adversarial Machine Learning
- Part II Causative Attacks on Machine Learning
- 4 Attacking a Hypersphere Learner
- 5 Availability Attack Case Study: SpamBayes
- 6 Integrity Attack Case Study: PCA Detector
- Part III Exploratory Attacks on Machine Learning
- Part IV Future Directions in Adversarial Machine Learning
- Part V Appendixes
- Glossary
- References
- Index
6 - Integrity Attack Case Study: PCA Detector
from Part II - Causative Attacks on Machine Learning
Published online by Cambridge University Press: 14 March 2019
Summary
Adversaries can use Causative attacks not only to disrupt normal user activity (as we demonstrated in Chapter 5) but also to evade the detector by causing it to produce many false negatives through an Integrity attack. In doing so, such adversaries reduce the odds that their malicious activities are successfully detected. This chapter presents a case study of the subspace anomaly detection methods introduced by Lakhina et al. (2004b) for detecting network-wide anomalies, such as denial-of-service (DoS) attacks, based on the dimensionality-reduction technique commonly known as principal component analysis (PCA) (Pearson 1901). We show that by injecting carefully crafted extraneous noise, or chaff, into the network during training, the PCA-based detector can be poisoned so that it is unable to effectively detect a subsequent DoS attack. We also demonstrate defenses against these attacks. Specifically, by replacing PCA with a more robust alternative subspace estimation procedure, we show that the resulting detector is resilient to poisoning and maintains a significantly lower false-positive rate when poisoned.
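To make the setup concrete, here is a minimal sketch of subspace-based volume-anomaly detection in the spirit of Lakhina et al. (2004b): link traffic is projected onto the top principal components (the "normal" subspace), and a time bin is flagged when the energy remaining in the residual subspace exceeds a threshold. The function names, the default choice of k, and the fixed threshold are illustrative assumptions, not the chapter's exact procedure.

```python
import numpy as np

def fit_normal_subspace(Y, k=4):
    """Y: t x m link-traffic matrix (t time bins, m links).
    Returns (mu, P): per-link means and an m x k orthonormal basis
    for the top-k principal ('normal' traffic) subspace."""
    mu = Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Y - mu, full_matrices=False)
    return mu, Vt[:k].T

def residual_energy(y, mu, P):
    """Squared prediction error: the energy of y lying outside
    the normal subspace."""
    yc = y - mu
    r = yc - P @ (P.T @ yc)
    return float(r @ r)

def is_anomalous(y, mu, P, q_threshold):
    """Flag a time bin whose residual energy exceeds the threshold;
    Lakhina et al. derive it via the Q-statistic, fixed here for brevity."""
    return residual_energy(y, mu, P) > q_threshold
```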
The PCA-based detector we analyze was first proposed by Lakhina et al. (2004b) as a method for identifying volume anomalies in a backbone network. This basic technique led to a variety of extensions of the original method (e.g., Lakhina, Crovella, & Diot 2004a, 2005a, 2005b) and to related techniques for diagnosing large-volume network anomalies (e.g., Brauckhoff, Salamatian, & May 2009; Huang, Nguyen, Garofalakis, Jordan, Joseph, & Taft 2007; Li, Bian, Crovella, Diot, Govindan, Iannaccone, & Lakhina 2006; Ringberg, Soule, Rexford, & Diot 2007; Zhang, Ge, Greenberg, & Roughan 2005). While the subspace-based method successfully detects DoS attacks in network traffic, it assumes the detector is trained on nonmalicious data (in an unsupervised fashion, as is typical in anomaly detection). Instead, we consider an adversary who knows that an ISP is using a subspace-based anomaly detector and attempts to evade it by proactively poisoning the training data.
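Such poisoning injects chaff along the links traversed by the flow the adversary later intends to attack, inflating variance in that direction during training. The sketch below shows one simple, uninformed chaff scheme; the exponential chaff volumes, the function interface, and the parameter theta are assumptions for illustration, not the chapter's attack.

```python
import numpy as np

def poison_with_chaff(Y, target_links, theta, rng=None):
    """Add nonnegative chaff to the training matrix Y (t x m) on the
    links traversed by the adversary's target flow. Larger theta means
    more average chaff, which rotates PCA's normal subspace toward the
    attack direction at the cost of being easier to notice."""
    rng = rng or np.random.default_rng(0)
    Y_poisoned = Y.astype(float)  # astype copies, so Y is untouched
    # Draw one chaff volume per time bin; exponential keeps it nonnegative.
    chaff = rng.exponential(scale=theta, size=Y.shape[0])
    for j in target_links:
        Y_poisoned[:, j] += chaff
    return Y_poisoned
```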
We consider an adversary whose goal is to circumvent detection by poisoning the training data; i.e., an Integrity goal to increase the detector's false-negative rate, which corresponds to the evasion success rate of the attacker's subsequent DoS attack. When trained on this poisoned data, the detector learns a distorted set of principal components that are unable to effectively discern the desired DoS attacks, making this a Targeted attack.
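The defense mentioned above replaces PCA with a subspace estimate built on a robust dispersion measure, so a small fraction of chaff cannot drag the principal directions toward the attack. The projection-pursuit sketch below uses the median absolute deviation and a crude random search in place of the chapter's actual grid-search estimator; treat it as an assumption-laden illustration of the idea, not the real procedure.

```python
import numpy as np

def mad(x):
    """Median absolute deviation: a dispersion measure resistant to the
    heavy-tailed chaff that inflates ordinary variance."""
    return np.median(np.abs(x - np.median(x)))

def robust_direction(Yc, n_candidates=2000, rng=None):
    """Approximate the unit direction maximizing MAD of the projections."""
    rng = rng or np.random.default_rng(0)
    best, best_score = None, -np.inf
    for _ in range(n_candidates):
        v = rng.normal(size=Yc.shape[1])
        v /= np.linalg.norm(v)
        score = mad(Yc @ v)
        if score > best_score:
            best, best_score = v, score
    return best

def robust_subspace(Y, k=4):
    """Greedily extract k robust directions, deflating after each."""
    Yc = Y - np.median(Y, axis=0)
    basis = []
    for _ in range(k):
        v = robust_direction(Yc)
        for b in basis:                # re-orthogonalize against found basis
            v -= (v @ b) * b
        v /= np.linalg.norm(v)
        basis.append(v)
        Yc = Yc - np.outer(Yc @ v, v)  # deflate: remove the found direction
    return np.column_stack(basis)
```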
Adversarial Machine Learning, pp. 134–164. Cambridge University Press. Print publication year: 2019.