Client Drift
Client drift in federated learning (FL) refers to the divergence of locally trained models on individual devices from the globally shared model, driven by data heterogeneity across clients, which hinders the overall learning process. Current research focuses on mitigating this drift through techniques such as gradient correction, adaptive bias estimation, and self-distillation, often integrated into existing FL algorithms like FedAvg, or through novel approaches such as gradual model unfreezing. Addressing client drift is crucial for improving the accuracy and efficiency of FL, enabling robust, privacy-preserving distributed machine learning across diverse datasets and applications.
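To make the effect concrete, the following is a minimal toy sketch (not from any paper listed here) of client drift and a SCAFFOLD-style gradient correction. It assumes each client holds a simple quadratic loss f_i(w) = 0.5*(w - b_i)^2, so the global optimum is the mean of the b_i; the client optima b_i, the per-client local step counts, and the learning rate are all made-up illustrative values. With heterogeneous numbers of local steps, plain FedAvg converges to a biased point, while correcting each local gradient with control variates removes the drift:

```python
import numpy as np

# Hypothetical toy setup: client i minimizes f_i(w) = 0.5*(w - b_i)^2,
# so grad f_i(w) = w - b_i and the global optimum is mean(b_i).
b = np.array([0.0, 1.0, 10.0])   # heterogeneous client optima (assumed)
steps = [1, 1, 20]               # client 2 runs many more local steps
lr, rounds = 0.1, 200

def fedavg(w=0.0):
    """Plain FedAvg: local SGD on each client, then average."""
    for _ in range(rounds):
        local_models = []
        for bi, K in zip(b, steps):
            wi = w
            for _ in range(K):
                wi -= lr * (wi - bi)          # uncorrected local step
            local_models.append(wi)
        w = float(np.mean(local_models))
    return w

def scaffold_like(w=0.0):
    """SCAFFOLD-style correction: c_i = grad f_i at the server model,
    c = mean(c_i); each local gradient is shifted by (c - c_i)."""
    for _ in range(rounds):
        c_i = w - b                           # client grads at server model
        c = c_i.mean()                        # server control variate
        local_models = []
        for bi, ci, K in zip(b, c_i, steps):
            wi = w
            for _ in range(K):
                wi -= lr * ((wi - bi) - ci + c)   # corrected local step
            local_models.append(wi)
        w = float(np.mean(local_models))
    return w

w_avg, w_scaf = fedavg(), scaffold_like()
target = b.mean()                             # true global optimum
print(f"FedAvg:    {w_avg:.3f}  (drift {abs(w_avg - target):.3f})")
print(f"Corrected: {w_scaf:.3f}  (drift {abs(w_scaf - target):.3f})")
```

In this setup the uncorrected run settles far from the global optimum because the client taking more local steps dominates the average, while the control-variate correction makes every effective local gradient point at the global objective. Real gradient-correction methods apply the same idea to stochastic gradients over neural-network parameters.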
Papers
September 27, 2024
September 1, 2023
August 20, 2023
July 19, 2023
May 31, 2023
December 5, 2022
April 27, 2022