Client Variance Reduction
Client variance reduction in federated learning aims to make distributed model training more efficient and robust by mitigating the effects of heterogeneous data and resources across many clients. Current research focuses on algorithms, such as primal-dual methods and adaptive update strategies, that reduce client-level variance during model aggregation, often combined with communication compression to address bandwidth limits. These advances improve the convergence speed and accuracy of federated learning, particularly on non-independent and identically distributed (non-IID) data, enabling more practical and scalable applications across diverse domains.
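To make the idea concrete, below is a minimal sketch of one common client-variance-reduction scheme, a SCAFFOLD-style control-variate correction applied to local updates. The quadratic client objectives, step sizes, and dimensions are illustrative assumptions rather than details from any of the papers referenced here.

```python
import numpy as np

# Sketch of SCAFFOLD-style client variance reduction on synthetic non-IID data.
# Each client i keeps a control variate c_i; the server keeps a global c.
# Local step: x <- x - lr * (grad_i(x) - c_i + c), which corrects client drift.

rng = np.random.default_rng(0)
dim, num_clients, local_steps, lr = 5, 4, 10, 0.1

# Hypothetical non-IID client objectives: f_i(x) = 0.5 * ||x - b_i||^2
targets = [rng.normal(loc=i, scale=1.0, size=dim) for i in range(num_clients)]

def client_grad(i, x):
    return x - targets[i]  # gradient of f_i at x

x_global = np.zeros(dim)                      # server model
c_global = np.zeros(dim)                      # server control variate
c_clients = [np.zeros(dim) for _ in range(num_clients)]

for rnd in range(50):
    delta_x, delta_c = np.zeros(dim), np.zeros(dim)
    for i in range(num_clients):
        x = x_global.copy()
        for _ in range(local_steps):
            # Variance-reduced local step: the (c - c_i) term offsets client drift.
            x -= lr * (client_grad(i, x) - c_clients[i] + c_global)
        # Update the client control variate from the observed local progress.
        c_new = c_clients[i] - c_global + (x_global - x) / (local_steps * lr)
        delta_x += (x - x_global) / num_clients
        delta_c += (c_new - c_clients[i]) / num_clients
        c_clients[i] = c_new
    x_global += delta_x                       # server aggregation of model deltas
    c_global += delta_c                       # server aggregation of control-variate deltas

# The global optimum of the summed quadratics is the mean of the client targets.
print("distance to global optimum:",
      np.linalg.norm(x_global - np.mean(targets, axis=0)))
```

Without the control-variate correction, each client's local steps would drift toward its own target, and aggregation alone would converge slowly on such non-IID objectives; the correction keeps local updates aligned with the global descent direction.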