Byzantine-Robust Federated Learning

Byzantine-robust federated learning (FL) aims to secure collaborative model training across multiple clients against malicious actors (Byzantine clients) who inject faulty or adversarial updates. Current research centers on robust aggregation rules, often based on geometric medians, normalized gradients, or layer-adaptive sparsification, that filter out malicious updates and preserve convergence even when a significant fraction of clients is compromised. Such defenses are essential for deploying secure and reliable FL systems, countering data-poisoning and model-manipulation attacks while maintaining efficiency and privacy. The field is also exploring both centralized and decentralized architectures, incorporating techniques such as client clustering and asynchronous updates to improve robustness and scalability.
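To make the idea of robust aggregation concrete, here is a minimal NumPy sketch of one of the simplest such rules, the coordinate-wise median (a close relative of the geometric-median approaches mentioned above). The function name, the simulated client updates, and the attack values are all illustrative assumptions, not drawn from any specific paper; the point is only that a median-based aggregator ignores a minority of extreme poisoned updates where a plain mean does not.

```python
import numpy as np

def coordinate_wise_median(updates):
    """Robust aggregation sketch: per-coordinate median across client updates.

    If strictly fewer than half of the clients are Byzantine, each
    aggregated coordinate stays within the range of honest values.
    """
    return np.median(np.stack(updates), axis=0)

rng = np.random.default_rng(0)  # fixed seed for a reproducible demo

# Hypothetical round: four honest clients send similar small gradients,
# two Byzantine clients send large poisoned updates.
honest = [np.array([0.1, -0.2, 0.05]) + 0.01 * rng.standard_normal(3)
          for _ in range(4)]
byzantine = [np.full(3, 100.0) for _ in range(2)]

robust = coordinate_wise_median(honest + byzantine)
naive = np.mean(np.stack(honest + byzantine), axis=0)
```

With this setup the naive mean is dragged toward the attackers' values (each coordinate ends up around 33), while the coordinate-wise median stays near the honest gradients. Practical defenses such as Krum, trimmed mean, or geometric-median aggregation build on the same intuition with stronger guarantees.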

Papers