Model Poisoning Attacks
Model poisoning attacks target federated learning (FL) systems by injecting malicious model updates from compromised clients, aiming to degrade the global model's accuracy or introduce biases. Current research focuses on developing robust aggregation techniques, such as those employing uncertainty-aware evaluation, dynamic weighting, or Fourier transforms, to identify and mitigate the effects of these poisoned updates, often in conjunction with client-side defenses. Understanding and defending against these attacks is crucial for ensuring the reliability and trustworthiness of FL, a technology with significant potential for privacy-preserving machine learning across diverse applications.
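To make the attack-versus-defense dynamic concrete, below is a minimal sketch contrasting plain federated averaging with one standard robust aggregation rule, the coordinate-wise median. This is an illustrative example only: the scaled malicious update, client counts, and function names are assumptions for demonstration, not the specific uncertainty-aware, dynamic-weighting, or Fourier-based methods referenced above.

```python
import numpy as np

def poisoned_update(honest_update, scale=10.0):
    # A simple model poisoning attack: a compromised client submits a
    # large, inverted update to drag the global model off the honest
    # direction. (Real attacks are typically stealthier than this.)
    return -scale * honest_update

def fedavg(updates):
    # Plain federated averaging: a single large poisoned update can
    # dominate the mean.
    return np.mean(updates, axis=0)

def coordinate_wise_median(updates):
    # A standard robust aggregation rule: the per-coordinate median
    # bounds the influence of a minority of malicious clients.
    return np.median(updates, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = [rng.normal(1.0, 0.1, size=5) for _ in range(9)]  # 9 honest clients
    updates = honest + [poisoned_update(honest[0])]            # 1 compromised client
    print("FedAvg: ", fedavg(updates))                 # pulled toward the attacker
    print("Median: ", coordinate_wise_median(updates))  # stays near the honest mean
```

Running the sketch shows the averaged update shifted well below the honest clients' mean of roughly 1.0 per coordinate, while the median aggregate remains close to it, which is the basic intuition behind the robust aggregation defenses surveyed here.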