Compressed Communication

Compressed communication aims to reduce the substantial communication overhead in distributed machine learning, particularly in federated learning and decentralized settings, by transmitting quantized or sparsified model updates instead of full-precision vectors. Current research focuses on efficient compression techniques, such as one-bit quantization and randomized transforms, embedded in algorithms like stochastic gradient descent and operator splitting methods, often combined with error feedback mechanisms to preserve convergence rates. These advances are crucial for scaling machine learning to larger datasets and more complex models, improving both training efficiency and the feasibility of deployment in resource-constrained environments.
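
To make the pattern concrete, below is a minimal sketch of one-bit (sign) compression with error feedback, one common combination mentioned above. It is illustrative only and not the algorithm of any specific paper; the function name `compress_with_error_feedback` and the scaling choice are assumptions for this example.

```python
# Minimal sketch: one-bit (sign) compression with error feedback.
# Illustrative only; names and scaling are assumptions, not a specific paper's method.
import numpy as np

def compress_with_error_feedback(grad, error_buffer):
    """Transmit only the sign of (gradient + accumulated residual).

    A single scale factor preserves the average magnitude of the corrected
    gradient, so each coordinate costs roughly one bit plus one shared float.
    The part discarded by compression is kept locally and fed back next round.
    """
    corrected = grad + error_buffer              # add accumulated compression error
    scale = np.abs(corrected).mean()             # one scalar sent alongside the signs
    compressed = scale * np.sign(corrected)      # one-bit quantized update
    new_error = corrected - compressed           # residual stored locally (error feedback)
    return compressed, new_error

# Toy usage: a single worker compressing its gradient over a few steps.
rng = np.random.default_rng(0)
error = np.zeros(4)
for step in range(3):
    g = rng.normal(size=4)
    update, error = compress_with_error_feedback(g, error)
    print(step, update, np.linalg.norm(error))
```

In a distributed run, only `update` (signs plus one scale) would be communicated, while `error` stays on the worker; this local residual is what lets aggressive one-bit compression retain convergence guarantees in practice.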

Papers