Client Drift

Client drift in federated learning (FL) refers to the divergence of locally trained models on individual devices from the globally shared model. It arises from data heterogeneity across clients: because each client's non-IID data pulls its local updates toward its own local optimum, the averaged update steers the global model away from the true global objective and hinders overall learning. Current research mitigates this drift with techniques such as gradient correction, adaptive bias estimation, and self-distillation, often integrated into existing FL algorithms like FedAvg, or with novel approaches such as gradual model unfreezing. Addressing client drift is crucial for improving the accuracy and efficiency of FL, enabling robust, privacy-preserving distributed machine learning across diverse datasets and applications.
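
The gradient-correction idea can be made concrete with a SCAFFOLD-style control variate plugged into a plain FedAvg loop. The sketch below is a minimal illustration, not a reference implementation of any particular paper: the synthetic linear-regression clients, the function names (`local_update`, `fedavg_with_correction`), and all hyperparameters are assumptions chosen for clarity.

```python
import numpy as np

def local_update(w_global, c_global, c_local, data, lr, steps):
    """One client's local training round. Each SGD step is corrected by
    (c_global - c_local), an estimate of how this client's gradient
    direction deviates from the average direction across clients."""
    w = w_global.copy()
    X, y = data
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)   # gradient of MSE loss
        w -= lr * (grad - c_local + c_global)      # drift-corrected step
    # Refresh this client's control variate from the realized update
    # (SCAFFOLD's cheap "option II" estimate).
    c_local_new = c_local - c_global + (w_global - w) / (lr * steps)
    return w, c_local_new

def fedavg_with_correction(clients, dim, rounds=100, lr=0.01, steps=5):
    """FedAvg aggregation with server-side tracking of control variates."""
    w_global = np.zeros(dim)
    c_global = np.zeros(dim)
    c_locals = [np.zeros(dim) for _ in clients]
    for _ in range(rounds):
        local_ws, c_deltas = [], []
        for i, data in enumerate(clients):
            w_i, c_i_new = local_update(
                w_global, c_global, c_locals[i], data, lr, steps)
            local_ws.append(w_i)
            c_deltas.append(c_i_new - c_locals[i])
            c_locals[i] = c_i_new
        w_global = np.mean(local_ws, axis=0)        # FedAvg averaging
        c_global += np.mean(c_deltas, axis=0)       # update server variate
    return w_global

# Hypothetical non-IID setup: feature distributions shift per client, and
# per-client noise gives each client a distinct local optimum.
rng = np.random.default_rng(0)
w_true = rng.normal(size=3)
clients = []
for k in range(4):
    X = rng.normal(loc=float(k), size=(32, 3))      # shifted features
    y = X @ w_true + 0.5 * rng.normal(size=32)      # client-specific noise
    clients.append((X, y))

print(fedavg_with_correction(clients, dim=3))
```

The key design choice is the correction term `c_global - c_local` added to every local gradient step: it cancels the component of each client's update that reflects only its own data distribution, so local steps more closely follow the centralized objective even under many local epochs.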

Papers