Aggregated Gradient
Aggregated gradient methods are central to distributed machine learning: they combine gradient updates from multiple sources (e.g., devices in federated learning) to train a shared model efficiently. Current research focuses on improving the robustness and privacy of aggregation, addressing challenges such as stragglers, data heterogeneity, and adversarial attacks through adaptive weighting schemes, coded computation, and secure aggregation protocols. These advances are crucial for scaling machine learning to larger datasets and more diverse environments while mitigating privacy risks and improving model accuracy and training efficiency.