Model Poisoning Attack
Model poisoning attacks target federated learning (FL) systems by injecting malicious model updates from compromised clients, aiming to degrade the global model's accuracy or introduce biases. Current research focuses on developing robust aggregation techniques, such as those employing uncertainty-aware evaluation, dynamic weighting, or Fourier transforms, to identify and mitigate the effects of these poisoned updates, often in conjunction with client-side defenses. Understanding and defending against these attacks is crucial for ensuring the reliability and trustworthiness of FL, a technology with significant potential for privacy-preserving machine learning across diverse applications.
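As a minimal sketch of the robust-aggregation idea mentioned above (assuming numpy; the function name and the trimmed-mean scheme are illustrative and not drawn from any specific paper listed below), the server can discard extreme per-coordinate values across client updates before averaging, which bounds the influence of a small number of poisoned updates:

```python
import numpy as np

def trimmed_mean_aggregate(client_updates, trim_ratio=0.1):
    """Aggregate client updates with a coordinate-wise trimmed mean.

    client_updates: list of 1-D numpy arrays (flattened model updates),
                    one per client.
    trim_ratio:     fraction of the largest and smallest values dropped
                    per coordinate before averaging.
    """
    updates = np.stack(client_updates)        # shape: (n_clients, n_params)
    n_clients = updates.shape[0]
    k = int(trim_ratio * n_clients)           # values trimmed on each side

    # Sort each coordinate across clients, drop the k smallest and k largest
    # values, then average the remaining ones.
    sorted_updates = np.sort(updates, axis=0)
    if k > 0:
        sorted_updates = sorted_updates[k:n_clients - k]
    return sorted_updates.mean(axis=0)


# Toy usage: nine benign clients plus one compromised client that submits a
# heavily scaled update intended to skew the global model.
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, size=1000) for _ in range(9)]
poisoned = [np.full(1000, 50.0)]
aggregated = trimmed_mean_aggregate(benign + poisoned, trim_ratio=0.1)
print("max coordinate after robust aggregation:", aggregated.max())
```

In this toy setting the poisoned update falls in the trimmed tail of every coordinate, so the aggregate stays close to the benign average; the uncertainty-aware, dynamic-weighting, and Fourier-based methods mentioned above pursue the same goal with more adaptive ways of scoring or down-weighting suspicious updates.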
Papers