Asynchronous Federated Learning
Asynchronous federated learning (AFL) aims to improve the efficiency and scalability of federated learning by allowing clients to update a shared model independently, each at its own pace, rather than in synchronized rounds. Current research addresses challenges such as the staleness of client updates, heterogeneity in client resources and data distributions, and robustness against Byzantine failures, often employing techniques such as buffered aggregation, gradient compression, and adaptive client selection. These advances make federated learning more practical in resource-constrained and decentralized environments, with applications ranging from IoT networks and autonomous driving to geo-distributed systems and satellite constellations.
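To make the buffered-aggregation idea concrete, here is a minimal sketch of an asynchronous server that collects client updates into a buffer and applies a staleness-discounted average once the buffer fills. All names, the buffer size, and the `1 / (1 + staleness)` discount are illustrative assumptions, not the exact method of any of the papers below.

```python
# Illustrative sketch of buffered, staleness-aware asynchronous aggregation.
# The staleness_weight rule and class/parameter names are assumptions chosen
# for clarity, not a specific published algorithm.

def staleness_weight(staleness: int) -> float:
    """Down-weight updates computed against an older model version."""
    return 1.0 / (1.0 + staleness)

class AsyncServer:
    def __init__(self, dim: int, buffer_size: int = 2, lr: float = 1.0):
        self.model = [0.0] * dim    # shared global model (as a flat vector)
        self.version = 0            # incremented on every aggregation step
        self.buffer = []            # pending (update, weight) pairs
        self.buffer_size = buffer_size
        self.lr = lr

    def receive(self, update, client_version: int):
        # Clients run at their own pace, so an arriving update may have been
        # computed against an older model version than the current one.
        staleness = self.version - client_version
        self.buffer.append((update, staleness_weight(staleness)))
        if len(self.buffer) >= self.buffer_size:
            self._aggregate()

    def _aggregate(self):
        # Weighted average of buffered updates, discounted by staleness.
        total = sum(w for _, w in self.buffer)
        for i in range(len(self.model)):
            delta = sum(w * u[i] for u, w in self.buffer) / total
            self.model[i] += self.lr * delta
        self.version += 1
        self.buffer.clear()
```

A fresh update (staleness 0) gets full weight, while an update computed two versions ago is discounted to one third; the server never blocks waiting for slow clients, it simply aggregates whatever the buffer holds when it fills.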
Papers
DRACO: Decentralized Asynchronous Federated Learning over Continuous Row-Stochastic Network Matrices
Eunjeong Jeong, Marios Kountouris
A Resource-Adaptive Approach for Federated Learning under Resource-Constrained Environments
Ruirui Zhang, Xingze Wu, Yifei Zou, Zhenzhen Xie, Peng Li, Xiuzhen Cheng, Dongxiao Yu