Decentralized Optimization

Decentralized optimization solves large-scale optimization problems by distributing computation and data across a network of agents, without relying on a central server. Current research emphasizes efficient algorithms based on gradient tracking, ADMM, and momentum, often combined with techniques such as communication compression and asynchronous updates to reduce communication cost and tolerate network delays and failures. The field is crucial for addressing privacy concerns in machine learning, enabling large-scale training on distributed datasets, and coordinating multi-agent systems in applications such as smart grids and autonomous vehicle control.
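As a concrete illustration of the gradient-tracking approach mentioned above, here is a minimal sketch of decentralized gradient tracking (in the style of DIGing) on a toy problem. The problem, the ring topology, the step size, and all variable names are assumptions for illustration: each agent holds a scalar quadratic, so the global minimizer of the sum is the mean of the agents' targets, and agents exchange values only with their ring neighbors via a doubly stochastic mixing matrix.

```python
import numpy as np

# Hypothetical toy problem: agent i holds f_i(x) = 0.5 * (x - b[i])**2,
# so the minimizer of sum_i f_i is the mean of b. No central server:
# each agent only mixes with its ring neighbors through W.
n = 5
rng = np.random.default_rng(0)
b = rng.normal(size=n)

def grad(i, x):
    # Local gradient of f_i, evaluated only by agent i.
    return x - b[i]

# Doubly stochastic mixing matrix for a ring topology (assumed weights).
W = np.eye(n) * 0.5
for i in range(n):
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha = 0.1                 # step size (assumed)
x = np.zeros(n)             # each agent's local estimate of the solution
# Gradient trackers, initialized to the local gradients.
y = np.array([grad(i, x[i]) for i in range(n)])

for _ in range(300):
    # Consensus step plus a step along the tracked global gradient.
    x_new = W @ x - alpha * y
    # Tracker update: mix, then add the change in the local gradient,
    # which keeps the average of y equal to the average gradient.
    y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x = x_new

# All agents converge to the common minimizer, the mean of b.
consensus_error = np.max(np.abs(x - b.mean()))
```

Because W is doubly stochastic, the tracker update preserves the property that the average of the `y` values equals the average of the local gradients, which is what lets each agent follow an estimate of the global gradient using only neighbor communication.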

Papers