Adaptive Federated Learning
Adaptive federated learning (AFL) aims to improve the efficiency and robustness of federated learning (FL) by dynamically adjusting training parameters based on the characteristics of the distributed data and on network conditions. Current research focuses on adaptive algorithms, such as federated variants of Adam and AdaGrad, that tune learning rates and client-selection strategies to cope with stragglers, non-IID data, and network heterogeneity. These advances yield faster convergence, higher model accuracy, and stronger privacy preservation across diverse settings, with significant implications for large-scale distributed machine learning.
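To make the idea concrete, here is a minimal sketch of server-side adaptive optimization in the style of FedAdam: clients run local SGD and send back model deltas, and the server aggregates them with an Adam-like update instead of plain averaging. The least-squares clients, hyperparameters, and function names below are illustrative assumptions, not any specific paper's method.

```python
import numpy as np

def client_update(weights, data, lr=0.01, epochs=1):
    # Local SGD on a toy least-squares objective (stand-in for local training).
    X, y = data
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w - weights  # model delta sent back to the server

def fedadam_round(weights, m, v, deltas, eta=0.1, b1=0.9, b2=0.99, tau=1e-3):
    # FedAdam-style server step: average client deltas, then apply an
    # Adam-like update with momentum m and second-moment accumulator v.
    delta = np.mean(deltas, axis=0)
    m = b1 * m + (1 - b1) * delta
    v = b2 * v + (1 - b2) * delta ** 2
    weights = weights + eta * m / (np.sqrt(v) + tau)
    return weights, m, v

# Simulate four clients with shifted feature distributions (non-IID data).
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
clients = []
for i in range(4):
    X = rng.normal(loc=0.5 * i, size=(50, 3))  # each client sees shifted inputs
    y = X @ w_true + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(3)
m = np.zeros(3)
v = np.zeros(3)
for _ in range(100):
    deltas = np.stack([client_update(w, d) for d in clients])
    w, m, v = fedadam_round(w, m, v, deltas)
```

The per-coordinate scaling by `sqrt(v)` is what makes the server update adaptive: coordinates whose client deltas disagree or fluctuate get smaller effective steps, which is the usual motivation for Adam/AdaGrad-style aggregation under client heterogeneity.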