Paper ID: 2303.15799

FedAgg: Adaptive Federated Learning with Aggregated Gradients

Wenhao Yuan, Xuehe Wang

Federated Learning (FL) has emerged as a crucial distributed training paradigm, enabling discrete devices to collaboratively train a shared model under the coordination of a central server while leveraging their locally stored private data. Nonetheless, the non-independent-and-identically-distributed (Non-IID) data generated on heterogeneous clients and the incessant information exchange among participants may significantly impede training efficacy, slow the model convergence rate, and increase the risk of privacy leakage. To alleviate the divergence between the local and average model parameters and obtain a fast model convergence rate, we propose an adaptive FEDerated learning algorithm called FedAgg, which refines the conventional stochastic gradient descent (SGD) method with an AGgregated Gradient term at each local training epoch and adaptively adjusts the learning rate based on a penalty term that quantifies the local model deviation. To tackle the challenge of information exchange among clients during local training and to design a decentralized adaptive learning rate for each client, we introduce two mean-field terms to approximate the average local parameters and gradients over time. Through rigorous theoretical analysis, we demonstrate the existence and convergence of the mean-field terms and provide a robust upper bound on the convergence rate of our proposed algorithm. Extensive experimental results on real-world datasets substantiate the superiority of our framework over existing state-of-the-art FL strategies, enhancing model performance and accelerating the convergence rate on both IID and Non-IID datasets.
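To make the idea in the abstract concrete, the following is a minimal sketch of one local client update that blends the local gradient with an aggregated-gradient estimate and scales the learning rate by the client's deviation from the (estimated) average model. This is not the paper's exact update rule: the names `phi`, `psi`, `base_lr`, `penalty`, and the specific form of the adaptive learning rate are illustrative assumptions standing in for the mean-field terms and penalty described above.

```python
import numpy as np

def local_update(w_i, grad_i, phi, psi, base_lr=0.1, penalty=1.0):
    """One illustrative local training step for client i.

    w_i     : client i's current local parameters (np.ndarray)
    grad_i  : client i's local stochastic gradient
    phi     : mean-field estimate of the average local parameters (assumed name)
    psi     : mean-field estimate of the average local gradients (assumed name)
    base_lr : nominal learning rate
    penalty : weight on the local-model deviation term
    """
    # Adaptive learning rate: shrink the step as the local model drifts
    # farther from the estimated average model. The exact functional form
    # in the paper differs; this is only a stand-in.
    deviation = np.linalg.norm(w_i - phi)
    lr_i = base_lr / (1.0 + penalty * deviation)

    # Refined descent direction: local gradient combined with the
    # aggregated-gradient estimate psi.
    direction = grad_i + psi

    return w_i - lr_i * direction
```

In this sketch, `phi` and `psi` play the role of the two mean-field terms, letting each client compute its adaptive step without exchanging parameters or gradients with other clients during local training.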

Submitted: Mar 28, 2023