Decentralized Stochastic Gradient Descent
Decentralized Stochastic Gradient Descent (D-SGD) is a distributed optimization technique that enables collaborative model training across multiple devices without a central server, with the aim of improving efficiency and scalability in machine learning. Instead of synchronizing through a parameter server, each node alternates local stochastic gradient updates with averaging its model over neighbours in a communication graph, so information spreads through the network by gossip. Current research focuses on enhancing D-SGD's convergence speed, generalization ability, and robustness to communication constraints and data heterogeneity, exploring various algorithmic improvements and communication topologies. These advances matter for large-scale machine learning, offering privacy-preserving training and better performance in resource-limited environments. A minimal sketch of the update rule appears below.
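The following is a minimal sketch of the basic D-SGD loop, not any specific paper's method: each node takes a local (noisy) gradient step and then gossip-averages its parameters with its neighbours according to a mixing matrix. The ring topology, the quadratic per-node losses, and the helper names `ring_mixing_matrix` / `dsgd` are illustrative assumptions, not part of the source.

```python
import numpy as np

def ring_mixing_matrix(n):
    """Doubly stochastic mixing matrix for a ring topology: each node
    averages with itself and its two neighbours (weight 1/3 each)."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    return W

def dsgd(grad_fn, x0, W, steps=100, lr=0.1, rng=None):
    """Basic D-SGD: each node does a local stochastic gradient step,
    then mixes its iterate with its neighbours via the matrix W."""
    rng = rng or np.random.default_rng(0)
    X = np.array(x0, dtype=float)               # shape (n_nodes, dim)
    for _ in range(steps):
        G = np.stack([grad_fn(i, X[i], rng) for i in range(len(X))])
        X = W @ (X - lr * G)                    # local step, then gossip averaging
    return X

# Toy example (hypothetical): node i holds the loss ||x - t_i||^2 / 2,
# so the network-wide optimum is the mean of the targets t_i.
if __name__ == "__main__":
    n, dim = 8, 3
    targets = np.random.default_rng(1).normal(size=(n, dim))

    def grad_fn(i, x, rng):
        # Exact gradient plus small noise to mimic stochastic gradients.
        return (x - targets[i]) + 0.01 * rng.normal(size=x.shape)

    W = ring_mixing_matrix(n)
    X = dsgd(grad_fn, np.zeros((n, dim)), W, steps=500, lr=0.1)
    print("max deviation from consensus optimum:",
          np.abs(X - targets.mean(axis=0)).max())
```

With a sparser topology (e.g. a ring instead of a complete graph), each round is cheaper in communication but consensus spreads more slowly, which is exactly the convergence/communication trade-off the research summarized above studies.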
Papers
(Paper listing: 14 entries dated December 2, 2021 through May 18, 2024; titles and links not recovered from the original page.)