Distributed Optimization

Distributed optimization tackles the challenge of minimizing a global objective function by coordinating computations across multiple agents, each possessing local data. Current research emphasizes improving convergence rates and reducing communication overhead through techniques like gradient compression, local updates, and the incorporation of momentum and adaptive methods (e.g., Adam, Shampoo). These advancements are crucial for scaling machine learning models to massive datasets and enabling efficient computation in distributed systems, with applications ranging from federated learning to multi-robot coordination and smart grids. Furthermore, significant effort is dedicated to developing robust algorithms resilient to communication noise, Byzantine failures, and heterogeneous network conditions.
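The setting described above is usually written as minimizing a sum of local objectives, min_x (1/n) Σ_{i=1}^n f_i(x), where each f_i depends only on agent i's private data. As a rough illustration of two of the techniques mentioned (local updates and gradient compression), the sketch below simulates a handful of workers on a synthetic least-squares problem: each worker runs a few local SGD steps, compresses its model delta with a top-k sparsifier, and a server averages the compressed deltas. The helper names (top_k, local_sgd_round) and all hyperparameters are illustrative choices, not taken from any particular paper or library.

```python
import numpy as np

# Minimal sketch: local SGD with top-k compressed deltas, averaged by a server.
# Synthetic least-squares data is split across workers (assumed setup).

rng = np.random.default_rng(0)
n_workers, n_local, dim = 4, 250, 10
x_true = rng.normal(size=dim)

# Each worker holds a private shard of the data (its local objective f_i).
shards = []
for _ in range(n_workers):
    A = rng.normal(size=(n_local, dim))
    b = A @ x_true + 0.1 * rng.normal(size=n_local)
    shards.append((A, b))

def top_k(v, k):
    """Keep only the k largest-magnitude entries (a common compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def local_sgd_round(x, A, b, lr=0.01, steps=5, batch=32):
    """Run a few local SGD steps on one shard and return the model delta."""
    x_local = x.copy()
    for _ in range(steps):
        i = rng.integers(0, len(b), size=batch)
        grad = A[i].T @ (A[i] @ x_local - b[i]) / batch
        x_local -= lr * grad
    return x_local - x

x = np.zeros(dim)
for rnd in range(50):
    # Each worker computes a local update, compresses it, and the server
    # averages the compressed deltas (FedAvg-style aggregation).
    deltas = [top_k(local_sgd_round(x, A, b), k=dim // 2) for A, b in shards]
    x += np.mean(deltas, axis=0)

global_loss = np.mean([np.mean((A @ x - b) ** 2) for A, b in shards])
print(f"global MSE after 50 rounds: {global_loss:.4f}")
```

Only the sparsified deltas would cross the network in a real deployment, which is where the communication savings come from; momentum, adaptive preconditioning (Adam, Shampoo), and Byzantine-robust aggregation rules would slot into the server-side averaging step.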

Papers