Paper ID: 2207.08486 • Published Jul 18, 2022
Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications
Ali Raza, Shujun Li, Kim-Phuc Tran, Ludovic Koehl, Kim Duc Tran
Adversarial attacks such as poisoning attacks have attracted the attention of
many machine learning researchers. Traditionally, poisoning attacks attempt to
inject adversarial training data in order to manipulate the trained model. In
federated learning (FL), data poisoning attacks can be generalized to model
poisoning attacks, which cannot be detected by simpler methods due to the lack
of access to local training data by the detector. State-of-the-art poisoning
attack detection methods for FL suffer from various weaknesses, e.g.,
requiring the number of attackers to be known or to be sufficiently low,
working with i.i.d. data only, and incurring high computational complexity. To
overcome these weaknesses, we propose a novel framework for detecting
poisoning attacks in FL, which employs a
reference model based on a public dataset and an auditor model to detect
malicious updates. We implemented a detector based on the proposed framework
and using a one-class support vector machine (OC-SVM), which reaches the lowest
possible computational complexity of O(K), where K is the number of clients. We
evaluated our detector's performance against state-of-the-art (SOTA) poisoning
attacks for two typical applications of FL: electrocardiogram (ECG)
classification and human activity recognition (HAR). Our experimental results
showed that our detector outperforms other SOTA detection methods.