Decentralized Stochastic Methods
Decentralized stochastic methods address the problem of optimizing objectives distributed across a network of computing agents, seeking efficient and robust solutions without a central coordinator. Current research focuses on improving the convergence speed and communication efficiency of algorithms such as decentralized stochastic gradient descent (D-SGD) and its variants, often by incorporating variance reduction and consensus-based averaging, and sometimes by casting the problem as a variational inequality. These advances matter for scalable machine learning, distributed control systems, and other applications that require privacy-preserving collaborative computation across many devices or agents.
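To make the basic idea concrete, the sketch below shows a minimal D-SGD loop: each agent takes a stochastic gradient step on its own data and then averages its iterate with its neighbours via a mixing (gossip) matrix. This is an illustrative toy only, not the method of any particular paper listed here; the ring topology, Metropolis-style weights, least-squares objectives, and step size are all assumptions made for the example.

```python
import numpy as np

# Minimal D-SGD sketch: ring topology, synthetic least-squares objectives.
# Illustrative only; all problem and algorithm parameters are assumptions.

rng = np.random.default_rng(0)
n_agents, dim, n_samples = 8, 5, 200

# Each agent holds a private dataset generated around a common ground-truth model.
w_true = rng.normal(size=dim)
data = []
for _ in range(n_agents):
    A = rng.normal(size=(n_samples, dim))
    b = A @ w_true + 0.1 * rng.normal(size=n_samples)
    data.append((A, b))

# Doubly stochastic mixing matrix for a ring: each agent averages
# with itself and its two neighbours (uniform 1/3 weights).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n_agents] = 1 / 3
    W[i, (i + 1) % n_agents] = 1 / 3

x = np.zeros((n_agents, dim))   # local iterates, one row per agent
step = 0.05

for t in range(500):
    # 1) Local stochastic gradient step on a sampled mini-batch.
    grads = np.zeros_like(x)
    for i, (A, b) in enumerate(data):
        idx = rng.choice(n_samples, size=16, replace=False)
        Ai, bi = A[idx], b[idx]
        grads[i] = Ai.T @ (Ai @ x[i] - bi) / len(idx)
    # 2) Consensus (gossip) step: mix iterates with neighbours, then descend.
    x = W @ x - step * grads

print("disagreement across agents:", np.linalg.norm(x - x.mean(axis=0)))
print("distance to w_true:", np.linalg.norm(x.mean(axis=0) - w_true))
```

Variance-reduced and communication-compressed variants modify the same two-phase structure: the local gradient step is replaced by a corrected estimator, and the mixing step by a compressed or event-triggered exchange, while the neighbour-averaging pattern stays the same.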