Communication Complexity
Communication complexity studies the amount of communication needed for distributed computation, aiming to minimize this cost while maintaining accuracy. Current research focuses on optimizing communication in various machine learning settings, including federated learning and decentralized optimization, often employing techniques like local updates, compression, and variance reduction within algorithms such as gradient tracking and primal-dual methods. These advancements are crucial for scaling machine learning to massive datasets and resource-constrained environments, impacting fields like distributed training, multi-agent systems, and privacy-preserving computation.
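To make the compression idea above concrete, here is a minimal, illustrative sketch (not taken from any of the listed papers) of top-k gradient sparsification, one common way workers reduce the number of values sent per communication round. The function names and the choice of k are assumptions for the example only.

```python
import numpy as np

def topk_compress(grad: np.ndarray, k: int):
    """Keep only the k largest-magnitude entries of the gradient.

    Returns the indices and values a worker would transmit; the
    receiver reconstructs a sparse gradient from them.
    """
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of top-k entries
    return idx, flat[idx]

def topk_decompress(idx: np.ndarray, vals: np.ndarray, shape) -> np.ndarray:
    """Rebuild a dense gradient that is zero except at the kept entries."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = vals
    return flat.reshape(shape)

# Toy round: send 50 values instead of the full 1000-dimensional gradient.
rng = np.random.default_rng(0)
full_grad = rng.normal(size=(1000,))
idx, vals = topk_compress(full_grad, k=50)
recovered = topk_decompress(idx, vals, full_grad.shape)
```

In practice such compressors are typically paired with error feedback or variance-reduction corrections, since the dropped coordinates would otherwise bias the updates.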
Papers
Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities
Aleksandr Beznosikov, Martin Takáč, Alexander Gasnikov
On-Demand Communication for Asynchronous Multi-Agent Bandits
Yu-Zhen Janice Chen, Lin Yang, Xuchuang Wang, Xutong Liu, Mohammad Hajiesmaili, John C. S. Lui, Don Towsley
An Efficient Stochastic Algorithm for Decentralized Nonconvex-Strongly-Concave Minimax Optimization
Lesi Chen, Haishan Ye, Luo Luo
DIAMOND: Taming Sample and Communication Complexities in Decentralized Bilevel Optimization
Peiwen Qiu, Yining Li, Zhuqing Liu, Prashant Khanduri, Jia Liu, Ness B. Shroff, Elizabeth Serena Bentley, Kurt Turck