Decentralized SGD
Decentralized Stochastic Gradient Descent (D-SGD) is a distributed machine learning approach for training large models efficiently and privately across multiple agents without a central server. Current research focuses on improving D-SGD's convergence speed and robustness by optimizing communication topologies, addressing data heterogeneity, and mitigating issues such as the "entrapment problem" in random-walk algorithms. These advances matter because they enable scalable training of complex models on massive datasets while preserving data privacy and reducing communication overhead, with applications ranging from federated learning to the Internet of Things.
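To make the core mechanism concrete, below is a minimal sketch of a D-SGD iteration: each agent takes a stochastic gradient step on its own data, then gossips with its neighbors by averaging models according to a doubly stochastic mixing matrix. The ring topology, synthetic linear-regression data, and all parameter values here are illustrative assumptions, not taken from any particular paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed): n agents, each holding a private shard of a
# linear-regression problem with a shared ground-truth weight vector.
n_agents, dim, samples_per_agent = 8, 5, 50
w_true = rng.normal(size=dim)
data = []
for _ in range(n_agents):
    X = rng.normal(size=(samples_per_agent, dim))
    y = X @ w_true + 0.1 * rng.normal(size=samples_per_agent)
    data.append((X, y))

# Doubly stochastic mixing matrix W for a ring topology:
# each agent averages with itself and its two neighbors.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n_agents] = 1 / 3
    W[i, (i + 1) % n_agents] = 1 / 3

# Local models, one row per agent.
w = np.zeros((n_agents, dim))
lr, batch, steps = 0.05, 10, 300

for t in range(steps):
    # 1) Local stochastic gradient step on each agent's own data.
    grads = np.zeros_like(w)
    for i, (X, y) in enumerate(data):
        idx = rng.choice(samples_per_agent, size=batch, replace=False)
        Xb, yb = X[idx], y[idx]
        grads[i] = Xb.T @ (Xb @ w[i] - yb) / batch
    w = w - lr * grads

    # 2) Gossip step: each agent replaces its model with a weighted
    #    average of its neighbors' models (one round of W-mixing).
    w = W @ w

print("consensus error:", np.linalg.norm(w - w.mean(axis=0)))
print("distance to w_true:", np.linalg.norm(w.mean(axis=0) - w_true))
```

The mixing matrix encodes the communication topology; denser topologies mix information faster but cost more per round, which is exactly the trade-off the topology-optimization work mentioned above studies.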