Decentralized Optimization
Decentralized optimization focuses on solving large-scale optimization problems by distributing computation and data across a network of agents, without relying on a central server. Current research emphasizes developing efficient algorithms, such as those based on gradient tracking, ADMM, and momentum methods, often incorporating techniques like compression and asynchronous updates to improve communication efficiency and robustness to network delays and failures. This field is crucial for addressing privacy concerns in machine learning, enabling large-scale training of models on distributed datasets, and facilitating coordination in multi-agent systems across diverse applications like smart grids and autonomous vehicle control.
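The gradient-tracking approach mentioned above can be illustrated with a minimal sketch. All names, the ring topology, and the quadratic local objectives below are illustrative assumptions, not drawn from any specific paper on this page: each agent mixes its iterate with neighbors via a doubly stochastic matrix while a second variable tracks the network-average gradient.

```python
import numpy as np

# Minimal decentralized gradient-tracking sketch (illustrative assumptions).
# Each agent i minimizes f_i(x) = 0.5 * (x - b_i)^2, so the network-wide
# optimum of (1/n) * sum_i f_i is x* = mean(b).
n = 5
b = np.arange(n, dtype=float)       # local data; global optimum = b.mean() = 2.0

# Doubly stochastic mixing matrix for a ring: average with the two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def local_grads(x):
    return x - b                     # gradient of each agent's own objective

x = np.zeros(n)                      # one scalar iterate per agent
y = local_grads(x)                   # trackers start at the local gradients
alpha = 0.1                          # common step size

for _ in range(500):
    x_new = W @ x - alpha * y                        # mix, then descend
    y = W @ y + local_grads(x_new) - local_grads(x)  # track average gradient
    x = x_new
```

After the loop every agent's iterate sits near the global minimizer `b.mean()`, even though no agent ever sees the others' data directly; the tracker update preserves the invariant that the average of `y` equals the average of the local gradients.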
37 papers
Papers