Communication-Efficient Federated Learning

Communication-efficient federated learning (FL) aims to overcome the communication bottleneck inherent in distributed machine learning by reducing the volume of data exchanged between clients and a central server during model training. Current research focuses on techniques such as model sparsification, quantization, and second-order optimization methods (e.g., incorporating Hessian information to converge in fewer communication rounds) to reduce communication overhead while preserving model accuracy. These advances are crucial for the practical deployment of FL in resource-constrained environments and for scaling FL to larger models and datasets, with applications ranging from IoT to healthcare.
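
To make the first two techniques concrete, the sketch below compresses a client's model update by keeping only the top-k largest-magnitude entries (sparsification) and encoding the surviving values with uniform 8-bit quantization before upload. This is a minimal illustrative example, not any particular system's API; the function names, the choice of k, and the index encoding are all assumptions.

```python
import numpy as np

def topk_sparsify(update, k):
    """Keep only the k largest-magnitude entries of a flattened update."""
    flat = update.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the top-k magnitudes
    return idx, flat[idx]                         # transmit indices + values only

def quantize_8bit(values):
    """Uniform 8-bit quantization: map floats onto [0, 255] via a scale and offset."""
    lo, hi = values.min(), values.max()
    scale = (hi - lo) / 255 if hi > lo else 1.0
    q = np.round((values - lo) / scale).astype(np.uint8)
    return q, lo, scale

def dequantize_8bit(q, lo, scale):
    """Invert the 8-bit quantization (lossy)."""
    return q.astype(np.float32) * scale + lo

# Client side: compress a (hypothetical) local model update before upload.
rng = np.random.default_rng(0)
update = rng.normal(size=100_000).astype(np.float32)  # stand-in for a real update
k = 1_000                                             # keep the top 1% of entries
idx, vals = topk_sparsify(update, k)
q, lo, scale = quantize_8bit(vals)

# Server side: reconstruct a sparse approximation of the dense update.
recovered = np.zeros_like(update)
recovered[idx] = dequantize_8bit(q, lo, scale)

# Rough accounting: float32 dense payload vs. uint8 values + uint32 indices.
dense_bytes = update.nbytes
compressed_bytes = q.nbytes + idx.astype(np.uint32).nbytes + 8  # +8 for lo/scale
print(f"compression ratio: {dense_bytes / compressed_bytes:.0f}x")
```

In this setup the upload shrinks by roughly 80x; in practice, error-feedback mechanisms are often added so that the entries dropped by sparsification are accumulated locally and retransmitted in later rounds rather than lost.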

Papers