Arbitrary Delay
Arbitrary delay in distributed optimization and machine learning concerns algorithms that remain robust to unpredictable communication lags and asynchronous updates in parallel or decentralized systems. Current research emphasizes asynchronous methods, often built on mini-batching or gossip-based averaging, that tolerate stale gradients while still achieving convergence rates comparable to their synchronous, centralized counterparts; a toy sketch of the stale-gradient setting follows below. Such guarantees are crucial for efficient and scalable machine learning in settings where delays are inherent, including federated learning and multi-agent systems.
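Since the summary names asynchronous updates under arbitrary delay as the core difficulty, a minimal simulation may make the setting concrete: at step t the update uses a gradient computed at a stale iterate x_{t-tau_t}, where the delay tau_t is arbitrary up to some bound. The quadratic objective, the `max_delay` bound, and the step size below are illustrative assumptions, not any cited paper's setup.

```python
import numpy as np

# Toy sketch of SGD with arbitrary (bounded) gradient delays.
# Worker gradients are computed at stale iterates x_{t - tau_t}
# and applied at step t, with tau_t drawn at random each step.

rng = np.random.default_rng(0)

def grad(x):
    # Gradient of the toy quadratic f(x) = 0.5 * ||x||^2.
    return x

d = 5             # problem dimension (illustrative)
max_delay = 10    # assumed delay bound (hypothetical)
T = 500           # number of updates
lr = 0.05         # step size; analyses often scale it with 1/max_delay

x = rng.normal(size=d)
history = [x.copy()]  # past iterates, so stale gradients can be formed

for t in range(T):
    tau = rng.integers(0, min(t, max_delay) + 1)  # arbitrary delay tau_t
    stale_x = history[t - tau]                    # iterate the worker saw
    x = x - lr * grad(stale_x)                    # apply the stale gradient
    history.append(x.copy())

print(f"||x_T|| after {T} delayed updates: {np.linalg.norm(x):.2e}")
```

With a sufficiently small step size relative to the delay bound, the iterates still contract toward the optimum, which mirrors the informal claim above that well-designed asynchronous methods match centralized convergence rates up to delay-dependent factors.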