Arbitrary Delay

Arbitrary delay in distributed optimization and machine learning concerns the design of algorithms that remain robust to unpredictable communication lags and asynchronous updates in parallel or decentralized systems. Current research emphasizes asynchronous methods, often employing mini-batching or gossip-based algorithms, to mitigate the impact of delays, aiming for convergence rates comparable to those of centralized, delay-free methods. This work is crucial for enabling efficient and scalable machine learning in settings where communication delays are inherent, including federated learning and multi-agent systems.
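
As a concrete illustration of the delayed-update setting, the minimal sketch below runs SGD where each step applies a gradient computed at a stale iterate. The random bounded delay model, the toy quadratic objective, and all names (`delayed_sgd`, `grad_fn`, `max_delay`) are illustrative assumptions, not any specific published method.

```python
import random
import numpy as np

def delayed_sgd(grad_fn, x0, num_steps=200, max_delay=5, lr=0.05, seed=0):
    """SGD sketch with arbitrary delays: each update applies a gradient
    evaluated at a stale past iterate. The delay is drawn uniformly from
    [0, max_delay] here purely for illustration."""
    rng = random.Random(seed)
    x = np.asarray(x0, dtype=float)
    history = [x.copy()]  # keep past iterates so stale ones can be looked up
    for t in range(num_steps):
        delay = rng.randint(0, min(max_delay, t))  # arbitrary delay tau_t
        stale_x = history[t - delay]               # iterate the worker saw
        x = x - lr * grad_fn(stale_x)              # apply the stale gradient
        history.append(x.copy())
    return x

# Toy quadratic f(x) = 0.5 * ||x||^2, so grad f(x) = x; the minimizer is 0.
if __name__ == "__main__":
    x_final = delayed_sgd(lambda x: x, x0=np.ones(3))
    print(x_final)  # near the zero vector despite stale gradients
```

With bounded delays and a sufficiently small step size, the stale gradients still drive the iterate toward the minimizer, which is the intuition behind delay-robust convergence guarantees.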

Papers