Decentralized Optimization
Decentralized optimization focuses on solving large-scale optimization problems by distributing computation and data across a network of agents, without relying on a central server. Current research emphasizes developing efficient algorithms, such as those based on gradient tracking, ADMM, and momentum methods, often incorporating techniques like compression and asynchronous updates to improve communication efficiency and robustness to network delays and failures. This field is crucial for addressing privacy concerns in machine learning, enabling large-scale training of models on distributed datasets, and facilitating coordination in multi-agent systems across diverse applications like smart grids and autonomous vehicle control.
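To make the gradient-tracking idea concrete, here is a minimal sketch of decentralized gradient tracking on a 5-agent ring network. The local objectives f_i(x) = ½(x − b_i)² and all numerical values are hypothetical toy data chosen for illustration; the mixing matrix and update rule follow the standard gradient-tracking template, not any specific paper listed on this page.

```python
import numpy as np

# Toy setup: 5 agents on a ring, each with a local quadratic objective
# f_i(x) = 0.5 * (x - b_i)^2, so the global minimizer is b.mean().
n = 5
b = np.arange(n, dtype=float)          # hypothetical local data

def grad(x):
    """Stacked local gradients: grad_i f_i(x_i) = x_i - b_i."""
    return x - b

# Doubly stochastic mixing matrix for a ring: self-weight 1/2, neighbors 1/4.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

alpha = 0.1                            # constant step size
x = np.zeros(n)                        # each agent's local estimate
y = grad(x)                            # gradient trackers, seeded with local gradients

for _ in range(1000):
    x_new = W @ x - alpha * y          # gossip with neighbors, step along tracked gradient
    y = W @ y + grad(x_new) - grad(x)  # track the network-average gradient
    x = x_new

print(x)  # all agents converge near the global minimizer b.mean() = 2.0
```

Unlike plain decentralized gradient descent, which with a constant step size only reaches a neighborhood of the optimum, the tracker `y` estimates the network-average gradient, letting every agent converge to the exact global minimizer despite each one seeing only its own data and its ring neighbors.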
37 papers