Decentralized Gradient Methods
Decentralized gradient methods train machine learning models collaboratively across distributed networks without a central server, keeping each node's data local to preserve privacy and improve efficiency. Current research emphasizes algorithms such as decentralized gradient descent and gradient tracking, often incorporating local updates and asynchronous communication to improve convergence speed and robustness to data heterogeneity and unreliable network conditions. This area enables large-scale machine learning in resource-constrained environments while enhancing data privacy; ongoing work focuses on communication efficiency, tighter convergence guarantees, and vulnerabilities to privacy attacks.
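To make the basic update concrete, here is a minimal sketch of plain decentralized gradient descent: each node mixes its model with its neighbours' models through a doubly-stochastic weight matrix and then takes a gradient step on its own private data. The ring topology, mixing weights, step size, and synthetic least-squares problem are all assumptions made for this illustration, not details taken from any specific paper.

```python
import numpy as np

# Assumed toy setup: 4 nodes on a ring, each holding a private
# least-squares objective f_i(x) = 0.5 * ||A_i x - b_i||^2.
rng = np.random.default_rng(0)
n_nodes, dim = 4, 5
A = [rng.standard_normal((20, dim)) for _ in range(n_nodes)]
b = [rng.standard_normal(20) for _ in range(n_nodes)]

def local_grad(i, x):
    """Gradient of node i's private objective at x."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly-stochastic mixing matrix for a ring topology:
# each node averages itself with its two neighbours.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

alpha = 0.01                      # constant step size (assumed)
x = np.zeros((n_nodes, dim))      # one model copy per node

for k in range(500):
    # 1) gossip step: each node averages its neighbours' models
    mixed = W @ x
    # 2) local gradient step on each node's own data
    grads = np.stack([local_grad(i, x[i]) for i in range(n_nodes)])
    x = mixed - alpha * grads

# With a constant step size, nodes reach approximate consensus in a
# neighbourhood of a minimiser of the average objective.
print("max disagreement between nodes:", np.max(np.abs(x - x.mean(axis=0))))
```

Gradient tracking extends this scheme with an auxiliary variable that tracks the network-average gradient, which removes the bias that constant-step-size decentralized gradient descent exhibits under heterogeneous local data.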