Decentralized Nonconvex Optimization
Decentralized nonconvex optimization addresses problems in which data is distributed across multiple agents and the objective function is nonconvex, so it may contain many local minima and saddle points rather than a single global optimum. Current research emphasizes efficient algorithms that cope with the communication bottlenecks inherent in distributed settings, often combining techniques such as gradient tracking, communication compression, and variance reduction to improve convergence rates. These advances are crucial for scaling machine learning to massive datasets and for enabling privacy-preserving distributed computation, with applications in federated learning and distributed sensor networks.
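To make the gradient-tracking idea concrete, below is a minimal sketch in NumPy, assuming a ring network of agents with Metropolis-style mixing weights and a simple synthetic nonconvex local loss; the network topology, objective, step size, and iteration count are illustrative choices, not taken from any particular paper listed here. Each agent maintains a local iterate x_i and a tracker y_i that estimates the network-wide average gradient, which is what lets the method converge to a stationary point of the global objective despite only neighbor-to-neighbor communication.

```python
# A minimal gradient-tracking sketch on a ring network (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 2
alpha = 0.01          # step size (assumed; must be small relative to smoothness)
iters = 2000

# Doubly stochastic mixing matrix for a ring: each agent averages
# equally with itself and its two neighbors.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n_agents] = 1 / 3
    W[i, (i + 1) % n_agents] = 1 / 3

# Each agent holds a nonconvex local loss f_i(x) = ||x - a_i||^2 + sin(b_i . x);
# the global objective is the average of the f_i.
a = rng.normal(size=(n_agents, dim))
b = rng.normal(size=(n_agents, dim))

def grad_f(i, x):
    """Gradient of agent i's local loss at x."""
    return 2 * (x - a[i]) + np.cos(b[i] @ x) * b[i]

# State: local iterates x_i and gradient trackers y_i,
# initialized so that y_i = grad f_i(x_i) at iteration 0.
x = rng.normal(size=(n_agents, dim))
y = np.stack([grad_f(i, x[i]) for i in range(n_agents)])
g_old = y.copy()

for _ in range(iters):
    # Consensus step on the iterates, then a descent step along the tracker.
    x_new = W @ x - alpha * y
    # Tracker update: mix with neighbors, then add the fresh local gradient
    # difference so each y_i keeps tracking the average gradient.
    g_new = np.stack([grad_f(i, x_new[i]) for i in range(n_agents)])
    y = W @ y + g_new - g_old
    x, g_old = x_new, g_new

# All agents should agree and sit near a stationary point of the average loss.
avg_grad = np.mean([grad_f(i, x[i]) for i in range(n_agents)], axis=0)
print("consensus spread:", np.linalg.norm(x - x.mean(axis=0)))
print("average gradient norm:", np.linalg.norm(avg_grad))
```

Communication compression and variance reduction plug into the same template: compression replaces the exchanged vectors in the W-mixing steps with quantized or sparsified versions, and variance reduction replaces the exact local gradients with cheaper stochastic estimates whose noise is progressively corrected.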