Model Poisoning Attack

Model poisoning attacks target federated learning (FL) systems by injecting malicious model updates from compromised clients, aiming to degrade the global model's accuracy or introduce biases. Current research focuses on robust aggregation techniques that identify and mitigate poisoned updates, for example through uncertainty-aware evaluation, dynamic weighting, or Fourier-domain analysis, often in conjunction with client-side defenses. Understanding and defending against these attacks is crucial for ensuring the reliability and trustworthiness of FL, a technology with significant potential for privacy-preserving machine learning across diverse applications.
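
As a minimal sketch of what robust aggregation means in practice, the NumPy snippet below contrasts plain federated averaging, which a few scaled malicious updates can shift arbitrarily, with a coordinate-wise trimmed mean, a classic robust aggregator. It is not one of the specific defenses mentioned above (uncertainty-aware evaluation, dynamic weighting, Fourier transforms); those vary by paper, and all function names and parameters here are illustrative.

```python
# Illustrative sketch: robust aggregation vs. plain averaging under model poisoning.
# Assumes each client update is flattened into a 1-D parameter vector.
import numpy as np


def fedavg(client_updates):
    """Plain federated averaging: a single poisoned update can shift the mean."""
    return np.mean(np.stack(client_updates), axis=0)


def trimmed_mean(client_updates, trim_ratio=0.2):
    """Coordinate-wise trimmed mean: for each parameter, drop the largest and
    smallest `trim_ratio` fraction of client values before averaging, bounding
    the influence of a minority of malicious clients."""
    stacked = np.stack(client_updates)       # shape: (num_clients, num_params)
    k = int(trim_ratio * stacked.shape[0])   # clients trimmed per side, per coordinate
    sorted_vals = np.sort(stacked, axis=0)   # sort each coordinate across clients
    kept = sorted_vals[k: stacked.shape[0] - k] if k > 0 else sorted_vals
    return np.mean(kept, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = [rng.normal(0.0, 0.1, size=10) for _ in range(8)]  # benign updates near 0
    poisoned = [np.full(10, 50.0) for _ in range(2)]            # scaled malicious updates
    updates = honest + poisoned
    print("FedAvg (poisoned):    ", np.round(fedavg(updates), 2))
    print("Trimmed mean (robust):", np.round(trimmed_mean(updates), 2))
```

With two of ten clients poisoned, the plain average is pulled toward the attackers' values, while the trimmed mean discards the extreme coordinates and stays close to the benign updates; published defenses refine this basic idea with learned or adaptive weighting of clients.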

Papers