Decentralized Proximal Methods
Decentralized proximal methods solve optimization problems over distributed networks without a central server, with an emphasis on communication efficiency and robustness. Current research focuses on algorithms that combine techniques such as gradient compression, variance reduction, and switching between different gradient oracles to speed up convergence and cut communication overhead, often for non-convex and saddle-point problems. These advances matter for large-scale machine learning, where they enable faster training of complex models and reduce the computational burden of distributed optimization.
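To ground the summary, the sketch below shows one common template these methods build on: a decentralized proximal-gradient iteration in the spirit of Prox-DGD, where each node averages its iterate with its neighbors' via a gossip (mixing) matrix, takes a gradient step on its local smooth loss, and applies the proximal operator of a nonsmooth regularizer (here the L1 norm, whose prox is soft-thresholding). The network, losses, and step size are illustrative assumptions rather than the setup of any particular paper; gradient compression and variance reduction would be layered on top of the communicated iterates and gradient estimates, respectively, and are omitted for clarity.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def decentralized_prox_grad(grads, W, x0, step=0.02, lam=0.01, iters=200):
    """Minimal decentralized proximal-gradient loop (Prox-DGD-style sketch).

    grads : list of callables, grads[i](x) = gradient of node i's smooth local loss
    W     : (n, n) doubly stochastic mixing (gossip) matrix matching the network
    x0    : (n, d) array of initial iterates, one row per node
    """
    x = x0.copy()
    for _ in range(iters):
        x_mix = W @ x                                     # gossip step: average with neighbors
        g = np.stack([grads[i](x[i]) for i in range(len(grads))])
        x = soft_threshold(x_mix - step * g, step * lam)  # local proximal-gradient update
    return x

if __name__ == "__main__":
    # Toy example: 3 fully connected nodes, each holding a local least-squares loss.
    rng = np.random.default_rng(0)
    A_list = [rng.standard_normal((10, 5)) for _ in range(3)]
    b_list = [rng.standard_normal(10) for _ in range(3)]
    grads = [lambda x, A=A, b=b: A.T @ (A @ x - b) for A, b in zip(A_list, b_list)]
    W = np.full((3, 3), 0.25) + 0.25 * np.eye(3)          # doubly stochastic mixing matrix
    x = decentralized_prox_grad(grads, W, x0=np.zeros((3, 5)))
    print("node disagreement:", np.linalg.norm(x - x.mean(axis=0)))
```

The printed disagreement measures how far the nodes' iterates are from consensus; variants of this template differ mainly in how the gossip step, the gradient oracle, and the proximal step are combined and compressed.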