Adaptive Federated Learning

Adaptive federated learning (AFL) aims to improve the efficiency and robustness of federated learning (FL) by dynamically adjusting training hyperparameters, such as learning rates and client participation, based on the characteristics of the distributed data and the network conditions. Current research focuses on adaptive optimization algorithms, including federated variants of Adam and AdaGrad (e.g., FedAdam, FedAdagrad), and on adaptive client-selection strategies that address challenges such as stragglers, non-IID data, and network heterogeneity. These advances yield faster convergence, improved model accuracy, and stronger privacy preservation across diverse settings, with significant implications for large-scale distributed machine learning applications.
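
To make the server-side adaptive optimization idea concrete, the following is a minimal sketch in the spirit of FedAdam-style methods: each client computes a local update, the server averages the resulting model deltas into a "pseudo-gradient," and applies Adam-style adaptive scaling to it. All function names, the toy linear-regression objective, and the hyperparameter values are illustrative assumptions, not taken from any specific paper's code.

```python
import numpy as np

def client_update(global_weights, local_data, lr=0.01, epochs=1):
    """Simulated local SGD on one client; returns the model delta.

    local_data is (X, y) for a toy linear-regression objective,
    chosen only to keep the sketch self-contained.
    """
    w = global_weights.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient
        w -= lr * grad
    return w - global_weights                    # client "pseudo-gradient"

def adaptive_server_step(global_weights, client_deltas, m, v,
                         server_lr=0.1, beta1=0.9, beta2=0.99, tau=1e-3):
    """One FedAdam-style server round: average the client deltas and
    apply Adam-style adaptive scaling on the server side."""
    delta = np.mean(client_deltas, axis=0)
    m = beta1 * m + (1 - beta1) * delta
    v = beta2 * v + (1 - beta2) * delta ** 2
    global_weights = global_weights + server_lr * m / (np.sqrt(v) + tau)
    return global_weights, m, v

# Toy simulation: 4 clients with differently distributed (non-IID) data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for shift in (-2, -1, 1, 2):
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ true_w + rng.normal(0, 0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
m, v = np.zeros(2), np.zeros(2)
for _ in range(50):
    deltas = [client_update(w, data) for data in clients]
    w, m, v = adaptive_server_step(w, deltas, m, v)
print("learned weights:", w)   # moves toward true_w = [2, -1]
```

In a deployed FL system the client updates would run on-device and only the deltas would reach the server; adaptive client selection and straggler handling would then decide which deltas enter the aggregation each round.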

Papers