Bagging classifiers for fighting poisoning attacks in adversarial classification tasks
Description
Pattern recognition systems are widely used in adversarial classification tasks such as spam filtering and intrusion detection in computer networks. In these applications, a malicious adversary may successfully mislead a classifier by "poisoning" its training data with carefully designed attacks. Bagging is a well-known ensemble construction method in which each classifier in the ensemble is trained on a different bootstrap replicate of the training set. Recent work has shown that bagging can reduce the influence of outliers in the training data, especially if the most outlying observations are resampled with a lower probability. In this work we argue that poisoning attacks can be viewed as a particular category of outliers, and that bagging ensembles may therefore be effectively exploited against them. We experimentally assess the effectiveness of bagging on a real, widely used spam filter and on a web-based intrusion detection system. Our preliminary results suggest that bagging ensembles can be a very promising defence strategy against poisoning attacks, and they provide valuable insights for future research.
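To make the core idea concrete, the sketch below builds a bagging ensemble in which each bootstrap replicate is drawn with sampling probabilities inversely proportional to an outlier score, so that the most outlying, and hence potentially poisoned, training points are resampled with lower probability. This is a minimal, hypothetical illustration, not the paper's exact procedure: the use of scikit-learn, the Local Outlier Factor as the outlier score, and decision trees as base classifiers are all assumptions made for the example.

```python
# Minimal sketch of outlier-aware bagging (illustrative only; the
# outlier-weighting scheme and base learner are assumptions, not the
# exact setup evaluated in the paper).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import LocalOutlierFactor

def fit_weighted_bagging(X, y, n_estimators=25, random_state=0):
    """X, y: numpy arrays with integer class labels (assumed)."""
    rng = np.random.default_rng(random_state)
    n = len(X)

    # Score how outlying each training point is; poisoned points are
    # assumed to receive high scores (LOF is a hypothetical choice).
    lof = LocalOutlierFactor(n_neighbors=20)
    lof.fit(X)
    outlyingness = -lof.negative_outlier_factor_  # larger = more outlying

    # Sampling probabilities inversely proportional to outlyingness:
    # the most outlying observations are resampled with lower probability.
    p = 1.0 / outlyingness
    p /= p.sum()

    ensemble = []
    for _ in range(n_estimators):
        idx = rng.choice(n, size=n, replace=True, p=p)  # weighted bootstrap
        ensemble.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return ensemble

def predict_majority(ensemble, X):
    # Combine the base classifiers by majority vote over their predictions.
    votes = np.stack([clf.predict(X) for clf in ensemble]).astype(int)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

With uniform probabilities (p omitted) this reduces to standard bagging; the inverse weighting is what downweights the suspected poisoning points during resampling.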
Additional details
- URL: https://hdl.handle.net/11567/1161321
- URN: urn:oai:iris.unige.it:11567/1161321
- Origin repository: UNIGE