FedAvg Converges

Federated Averaging (FedAvg) is a core algorithm in federated learning: each participating device trains a shared model on its local data, and a central server periodically averages the resulting models, so raw data never leaves the devices. Current research focuses on improving FedAvg's robustness and efficiency, particularly under non-identical (non-IID) data distributions across devices, unreliable communication, and heterogeneous computational capabilities. Active directions include personalized FedAvg, asynchronous variants, and techniques such as momentum and model extrapolation that accelerate convergence and improve performance. These advances are crucial for enabling practical applications of federated learning in diverse domains while preserving data privacy.
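
As a concrete illustration of the train-locally-then-average loop described above, here is a minimal NumPy sketch of FedAvg on a linear-regression model. The function names (`client_update`, `fedavg_round`), the hyperparameters, and the synthetic non-IID data are illustrative assumptions, not taken from any particular paper; the one faithful ingredient is the aggregation rule, which averages client models weighted by local dataset size.

```python
# Minimal FedAvg sketch (NumPy only). Names and hyperparameters are
# illustrative assumptions, not from a specific paper.
import numpy as np


def client_update(w, X, y, lr=0.1, local_steps=5):
    """Local training on one client: a few gradient steps on MSE loss."""
    w = w.copy()
    for _ in range(local_steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w


def fedavg_round(w_global, client_data, lr=0.1, local_steps=5):
    """One communication round: broadcast, train locally, weighted average."""
    new_weights, sizes = [], []
    for X, y in client_data:
        new_weights.append(client_update(w_global, X, y, lr, local_steps))
        sizes.append(len(y))
    # Aggregate: average client models weighted by local dataset size n_k / n.
    return np.average(np.stack(new_weights), axis=0, weights=np.asarray(sizes, float))


# Tiny demo: three clients with non-identical (covariate-shifted) data.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for shift in (0.0, 1.0, -1.0):  # per-client distribution shift (non-IID)
    X = rng.normal(loc=shift, size=(50, 2))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print("estimated weights:", w)  # approaches w_true as rounds accumulate
```

In a real deployment the clients run on separate devices and only model parameters cross the network; variants mentioned above change either the local step (personalization), the aggregation timing (asynchronous updates), or the server update (momentum, extrapolation).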

Papers