Paper ID: 2303.14966
Adaptive Federated Learning via New Entropy Approach
Shensheng Zheng, Wenhao Yuan, Xuehe Wang, Lingjie Duan
Federated Learning (FL) has emerged as a prominent distributed machine learning framework that enables geographically dispersed clients to collaboratively train a global model while preserving their privacy-sensitive data. However, due to the non-independent-and-identically-distributed (Non-IID) data generated by heterogeneous clients, the performance of conventional federated optimization schemes such as FedAvg and its variants deteriorates, calling for designs that adaptively adjust model parameters to alleviate the negative influence of heterogeneity. In this paper, leveraging entropy as a new metric for assessing the degree of system disorder, we propose an adaptive FEDerated learning algorithm based on ENTropy theory (FedEnt) to alleviate parameter deviation among heterogeneous clients and achieve fast convergence. However, given the data disparity and parameter deviation across heterogeneous clients, determining the optimal dynamic learning rate for each client is challenging, as there is no communication among participating clients during the local training epochs. To enable a decentralized learning rate for each participating client, we first introduce mean-field terms to estimate the components associated with other clients' local parameters. Furthermore, we provide a rigorous theoretical analysis of the existence and determination of the mean-field estimators. Based on these estimators, we derive the closed-form adaptive learning rate for each client by constructing the Hamilton equation, and we prove the convergence rate of the proposed FedEnt. Extensive experiments on real-world datasets (i.e., MNIST, EMNIST-L, CIFAR10, and CIFAR100) show that FedEnt outperforms FedAvg and its variants (i.e., FedAdam, FedProx, and FedDyn) under Non-IID settings and achieves a faster convergence rate.
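The abstract describes an entropy-guided, per-client adaptive learning rate, but does not state the paper's closed-form rule (which is derived via mean-field estimators and the Hamilton equation). The sketch below is therefore only illustrative: it substitutes an assumed entropy-damping heuristic into a toy FedAvg-style loop. The Client class, the quadratic local objectives, the parameter_entropy proxy, and the adaptation rule in fedent_round are all assumptions for illustration, not the paper's method.

```python
# Illustrative sketch only: an entropy-modulated learning rate in a toy
# FedAvg-style loop. The entropy proxy and the adaptation rule below are
# assumptions; FedEnt instead derives a closed-form per-client rate via
# mean-field estimators and the Hamilton equation.
import numpy as np

class Client:
    """Toy client whose local objective is ||w - target||^2 / 2; distinct
    targets stand in for heterogeneous (Non-IID) local data."""
    def __init__(self, target):
        self.target = np.asarray(target, dtype=float)

    def local_gradient(self, w):
        return w - self.target

def parameter_entropy(deviations, eps=1e-12):
    """Shannon entropy of the normalized per-client deviations -- an assumed
    proxy for the paper's system-disorder metric."""
    p = deviations / (deviations.sum() + eps)
    return -np.sum(p * np.log(p + eps))

def fedent_round(clients, w_global, base_lr=0.5):
    """One communication round with an entropy-modulated step size."""
    grads = [c.local_gradient(w_global) for c in clients]
    devs = np.array([np.linalg.norm(g) for g in grads])
    entropy = parameter_entropy(devs)
    max_entropy = np.log(len(clients))  # entropy of uniform deviations
    # Assumed rule: shrink the step when deviations concentrate on a few
    # clients (low entropy); keep it near base_lr when they are even.
    lr = base_lr * (entropy / max_entropy) if max_entropy > 0 else base_lr
    # FedAvg-style aggregation of the locally updated parameters.
    return np.mean([w_global - lr * g for g in grads], axis=0)

# Usage: three heterogeneous clients pulling toward different optima.
clients = [Client([1.0, 0.0]), Client([0.0, 1.0]), Client([2.0, 2.0])]
w = np.zeros(2)
for _ in range(50):
    w = fedent_round(clients, w)
print(w)  # approaches the average of the client targets, [1.0, 1.0]
```

The damping direction is itself a design assumption here: low entropy means the deviation mass is concentrated on a few clients, so the sketch takes smaller global steps to avoid being dragged by outliers, which mirrors the abstract's goal of alleviating parameter deviation among heterogeneous clients.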
Submitted: Mar 27, 2023