Aggregation Defense
Aggregation defenses in federated learning aim to protect collaboratively trained models from data poisoning and backdoor attacks, preserving the integrity and reliability of the final model. Current research focuses on designing robust aggregation schemes, such as Deep Partition Aggregation (DPA), and on analyzing their efficiency and effectiveness against increasingly sophisticated attacks, including those that leverage generative models and reinforcement learning. Understanding the practical limitations and vulnerabilities of these defenses is essential for building secure and trustworthy federated learning systems and, more broadly, for privacy-preserving machine learning applications.
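As a concrete illustration, the sketch below implements the core idea behind Deep Partition Aggregation under simplifying assumptions: the training set is split into disjoint partitions by a deterministic hash of each sample, a separate base model is trained on each partition, and test-time predictions are aggregated by plurality vote, so any single poisoned example can influence at most one base model. The NearestCentroid learner and the helper names (partition_index, dpa_fit, dpa_predict) are illustrative choices, not the original implementation, which trains a deep network per partition.

```python
import hashlib
import numpy as np

class NearestCentroid:
    """Trivial base learner: predicts the class with the closest class mean."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Distance from each sample to each class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

def partition_index(x, k):
    # Deterministic hash of the raw sample bytes -> partition id in [0, k).
    # The assignment depends only on the sample itself, never on other samples.
    return int(hashlib.sha256(x.tobytes()).hexdigest(), 16) % k

def dpa_fit(X, y, k):
    # Train one base model per non-empty partition.
    ids = np.array([partition_index(x, k) for x in X])
    models = []
    for p in range(k):
        mask = ids == p
        if mask.any():
            models.append(NearestCentroid().fit(X[mask], y[mask]))
    return models

def dpa_predict(models, X, num_classes):
    # Plurality vote over base models. The vote gap between the winning
    # class and the runner-up certifies robustness against roughly
    # floor(gap / 2) poisoned training samples (modulo tie-breaking).
    votes = np.zeros((len(X), num_classes), dtype=int)
    for m in models:
        for i, c in enumerate(m.predict(X)):
            votes[i, int(c)] += 1
    return votes.argmax(axis=1), votes

# Toy usage: two Gaussian blobs, 50 partitions (hypothetical parameters).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(4, 1, (300, 2))])
y = np.array([0] * 300 + [1] * 300)
models = dpa_fit(X, y, k=50)
preds, votes = dpa_predict(models, X[:5], num_classes=2)
```

The deterministic hash is the key design choice: because a sample's partition assignment does not depend on any other sample, poisoning m training points can change at most m base models, which is what turns the vote gap into a robustness certificate.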