FedAvg Algorithm

Federated Averaging (FedAvg) is a core algorithm in federated learning: it collaboratively trains a shared machine learning model across many decentralized devices without those devices ever sharing their raw data. Current research focuses on improving FedAvg's robustness to non-independent and identically distributed (non-IID) data through variants such as FedAvgM and FedProx, on reducing communication overhead via model compression and more efficient aggregation strategies, and on handling practical challenges such as client dropout and Byzantine attacks. These advancements improve the privacy and efficiency of federated learning, with applications ranging from IoT security to personalized healthcare.
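
To make the training loop concrete, below is a minimal sketch of FedAvg on a toy linear-regression task: the server broadcasts the global weights, each client runs a few epochs of local gradient descent on its private data, and the server averages the returned models weighted by each client's sample count. The function names, hyperparameters, and synthetic non-IID data are illustrative assumptions, not part of any specific library.

```python
# Minimal FedAvg sketch (assumed setup, not a reference implementation).
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.05, epochs=5):
    """Local training on one client's private data; the data never leaves the client."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """One communication round: broadcast, local training, weighted averaging."""
    local_models, counts = [], []
    for X, y in clients:
        local_models.append(local_sgd(global_w, X, y))
        counts.append(len(y))  # FedAvg weights each client by its number of samples
    weights = np.array(counts, dtype=float) / sum(counts)
    return sum(wk * m for wk, m in zip(weights, local_models))

# Synthetic non-IID clients: each holds features drawn from a shifted distribution.
true_w = np.array([2.0, -1.0])
clients = []
for shift in (0.0, 1.0, 2.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ true_w + rng.normal(0.0, 0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):
    global_w = fedavg_round(global_w, clients)
print(global_w)  # approaches [2.0, -1.0] without any client sharing raw data
```

Only model parameters cross the network in this sketch; variants such as FedProx or FedAvgM would modify the local objective or the server-side update, respectively, to cope better with the kind of client heterogeneity simulated here.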

Papers