Decentralized Optimization
Decentralized optimization focuses on solving large-scale optimization problems by distributing computation and data across a network of agents, without relying on a central server. Current research emphasizes developing efficient algorithms, such as those based on gradient tracking, ADMM, and momentum methods, often incorporating techniques like compression and asynchronous updates to improve communication efficiency and robustness to network delays and failures. This field is crucial for addressing privacy concerns in machine learning, enabling large-scale training of models on distributed datasets, and facilitating coordination in multi-agent systems across diverse applications like smart grids and autonomous vehicle control.
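The gradient-tracking methods mentioned above pair a consensus (neighbor-averaging) step with a correction term that lets every agent follow the network-wide average gradient instead of only its local one. The sketch below is a minimal illustration of that idea on a toy problem; the quadratic local losses, the ring mixing matrix W, the step size alpha, and the helper local_grad are illustrative assumptions, not details taken from any of the listed papers.

```python
import numpy as np

# Minimal gradient-tracking sketch (DIGing-style): n agents jointly minimize
# the sum of local quadratics f_i(x) = 0.5 * ||A_i x - b_i||^2 over a ring
# topology. All problem data and parameters here are toy assumptions.

rng = np.random.default_rng(0)
n_agents, dim = 5, 3
A = [rng.standard_normal((10, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(10) for _ in range(n_agents)]

def local_grad(i, x):
    """Gradient of agent i's local loss at x (hypothetical helper)."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring: each agent averages with its two neighbors.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

alpha = 0.02                                  # step size (assumed, not tuned)
x = np.zeros((n_agents, dim))                 # local iterates, one row per agent
g = np.array([local_grad(i, x[i]) for i in range(n_agents)])
y = g.copy()                                  # gradient trackers, y_i^0 = grad f_i(x_i^0)

for _ in range(500):
    x_new = W @ x - alpha * y                 # consensus step + descent along tracked gradient
    g_new = np.array([local_grad(i, x_new[i]) for i in range(n_agents)])
    y = W @ y + g_new - g                     # update trackers toward the average gradient
    x, g = x_new, g_new

# After enough iterations the agents agree on (approximately) the global minimizer.
print(np.ptp(x, axis=0))                      # per-coordinate spread across agents -> near zero
```

The communication pattern here is the point of interest: each iteration only exchanges vectors with ring neighbors (the rows of W), which is the structure that compression, asynchronous updates, and better-connected topologies aim to make cheaper or more robust.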
Papers
Communication-Efficient Topologies for Decentralized Learning with $O(1)$ Consensus Rate
Zhuoqing Song, Weijian Li, Kexin Jin, Lei Shi, Ming Yan, Wotao Yin, Kun Yuan
Revisiting Optimal Convergence Rate for Smooth and Non-convex Stochastic Decentralized Optimization
Kun Yuan, Xinmeng Huang, Yiming Chen, Xiaohan Zhang, Yingya Zhang, Pan Pan
Hybrid Decentralized Optimization: Leveraging Both First- and Zeroth-Order Optimizers for Faster Convergence
Matin Ansaripour, Shayan Talaei, Giorgi Nadiradze, Dan Alistarh