Decentralized Optimization
Decentralized optimization solves large-scale problems by distributing data and computation across a network of agents, with no central server coordinating the updates. Current research emphasizes efficient algorithms such as gradient tracking, ADMM, and momentum methods, often combined with compression and asynchronous updates to improve communication efficiency and robustness to network delays and failures. The field is central to privacy-preserving machine learning, large-scale training on distributed datasets, and coordination in multi-agent systems such as smart grids and autonomous vehicle fleets.
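To make the gradient-tracking idea concrete, here is a minimal sketch in NumPy. It is illustrative only: the ring topology, the quadratic local objectives f_i(x) = 0.5 (x - b_i)^2, and the step size are assumptions chosen for clarity, not taken from any particular paper. Each agent mixes its iterate with its neighbors' via a doubly stochastic matrix W while a second variable y tracks the network-wide average gradient, which is what allows exact convergence with a constant step size.

```python
import numpy as np

# Hypothetical toy setup: n agents on a ring, each holding a private
# target b[i]; the global objective is the average of the local
# quadratics f_i(x) = 0.5 * (x - b[i])**2, minimized at x* = mean(b).
n = 6
b = np.arange(1.0, n + 1.0)          # local data, one scalar per agent

def grad(x):
    # Stacked local gradients: agent i only evaluates x[i] - b[i].
    return x - b

# Metropolis mixing matrix for a ring: each agent averages itself and
# its two neighbors with weight 1/3; W is doubly stochastic.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3

# Gradient tracking: y[i] estimates the average gradient across agents,
# so the iterates reach the exact optimum despite heterogeneous data.
alpha = 0.1
x = np.zeros(n)
y = grad(x)
for _ in range(2000):
    x_new = W @ x - alpha * y        # consensus step plus tracked descent
    y = W @ y + grad(x_new) - grad(x)
    x = x_new

print(x)  # each agent's estimate approaches mean(b) = 3.5
```

Plain decentralized gradient descent (dropping y and using the raw local gradient) would stall at a step-size-dependent bias here; the tracking variable removes that bias, which is why gradient tracking is a common building block in this literature.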