Book contents
- Frontmatter
- Contents
- List of Symbols
- Acknowledgments
- Part I Overview of Adversarial Machine Learning
- Part II Causative Attacks on Machine Learning
- 4 Attacking a Hypersphere Learner
- 5 Availability Attack Case Study: SpamBayes
- 6 Integrity Attack Case Study: PCA Detector
- Part III Exploratory Attacks on Machine Learning
- Part IV Future Directions in Adversarial Machine Learning
- Part V Appendixes
- Glossary
- References
- Index
4 - Attacking a Hypersphere Learner
from Part II - Causative Attacks on Machine Learning
Published online by Cambridge University Press: 14 March 2019
Summary
In the second part of this book, we elaborate on Causative attacks, in which an adversary actively mistrains a learner by influencing the training data. We begin in this chapter by considering a simple adversarial learning game that can be analyzed theoretically. In particular, we examine the effect of malicious data on the learning task of anomaly (or outlier) detection. Anomaly detectors are often employed to identify novel malicious activities such as sending virus-laden email or misusing network-based resources. Because anomaly detectors often serve as components of learning-based detection systems, they are a probable target for attacks. Here we analyze potential attacks specifically against hypersphere-based anomaly detectors, in which a learned hypersphere defines the region of normal data and any data point lying outside the hypersphere's boundary is considered anomalous. Hypersphere detectors are used for anomaly detection because they provide an intuitive geometric notion of the region of normal points. These detectors are simple to train, and learning algorithms for hypersphere detectors can be kernelized; that is, implicitly extended into higher-dimensional spaces via a kernel function (Forrest et al. 1996; Rieck & Laskov 2006; Rieck & Laskov 2007; Wang & Stolfo 2004; Wang et al. 2006; Warrender et al. 1999). For our purposes in this chapter, hypersphere models provide a theoretical basis for understanding the types of attacks that can occur and their potential impact in a variety of settings. The results we present in this chapter provide intriguing insights into the threat of causative attacks. Then, in Chapters 5 and 6, we proceed to describe practical studies of causative attacks motivated by real-world applications of machine learning algorithms.
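To make the detection rule concrete, here is a minimal sketch of a mean-centered hypersphere detector in Python. The quantile-based choice of radius, the function names, and the example data are illustrative assumptions rather than the book's construction; the essential rule is that a point is flagged as anomalous exactly when it falls outside the learned boundary.

```python
import numpy as np

def train_hypersphere(X, quantile=0.95):
    """Fit a mean-centered hypersphere to clean training data X (n x d).

    The center is the empirical mean; the radius is chosen (an
    illustrative assumption) so that a given fraction of the training
    points fall inside the boundary.
    """
    center = X.mean(axis=0)
    dists = np.linalg.norm(X - center, axis=1)
    radius = np.quantile(dists, quantile)
    return center, radius

def is_anomalous(x, center, radius):
    """A point is anomalous iff it lies outside the hypersphere."""
    return np.linalg.norm(x - center) > radius

# Usage: normal data clustered near the origin, one distant outlier.
rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 2))
center, radius = train_hypersphere(X_train)
print(is_anomalous(np.array([0.5, -0.3]), center, radius))  # typically False
print(is_anomalous(np.array([8.0, 8.0]), center, radius))   # True
```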
The topic of hypersphere poisoning first arose in the design of virus and intrusion detection systems, for which anomaly detectors (including hypersphere detectors) have been used to identify abnormal emails or network packets and are therefore targets for attack. This line of work sought to investigate the vulnerability of proposed learning algorithms to adversarial contamination. The threat of an adversary systematically misleading an outlier detector led to the construction of a theoretical model for analyzing the impact of contamination. Nelson (2005) and Nelson & Joseph (2006) first analyzed a simple algorithm for anomaly detection based on bounding the normal data in a mean-centered hypersphere of fixed radius, as depicted in Figure 4.1(a).
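This contamination model can be simulated directly: an attacker who controls a stream of training points places each one on the current boundary, in the direction of a goal point it wants accepted as normal, so that every retraining step drags the mean slightly toward that goal. The sketch below is a hypothetical rendering of this game, assuming a detector that naively retrains its mean on all data seen so far with a fixed radius; it is not the book's formal model.

```python
import numpy as np

def poison_hypersphere(center, radius, goal, n_train, max_rounds=20000):
    """Simulate a greedy poisoning attack on a mean-centered hypersphere
    of fixed radius.

    Each round the attacker injects one point on the current boundary,
    in the direction of `goal`; the learner then retrains the center as
    the mean of all points seen so far. Returns the number of attack
    points needed before `goal` falls inside the hypersphere, or None.
    """
    center = np.asarray(center, dtype=float)
    goal = np.asarray(goal, dtype=float)
    n = n_train                       # size of the clean training set
    total = center * n                # running sum of all training points
    for t in range(max_rounds):
        direction = goal - center
        dist = np.linalg.norm(direction)
        if dist <= radius:            # goal is now classified as normal
            return t
        attack_point = center + radius * direction / dist
        total += attack_point         # learner naively retrains on all data
        n += 1
        center = total / n
    return None

# Usage: 100 clean points (summarized here by their mean), radius 1,
# adversarial goal at distance 5 from the initial center.
rounds = poison_hypersphere(np.zeros(2), 1.0, np.array([5.0, 0.0]), 100)
print(f"attack points needed: {rounds}")
```

Note the diminishing returns built into this game: the t-th injected point shifts the mean by at most radius/(n_train + t), so the cumulative displacement grows only logarithmically in the number of attack points, and in the example above the attack requires several thousand points to cover a distance of five radii.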
- Type: Chapter
- Information: Adversarial Machine Learning, pp. 69-104
- Publisher: Cambridge University Press
- Print publication year: 2019