Communication-Efficient Federated Learning
Communication-efficient federated learning (FL) seeks to overcome the communication bottleneck inherent in distributed machine learning by minimizing the data exchanged between clients and a central server during model training. Current research pursues two complementary directions: compressing each update through model sparsification and quantization, which shrinks the payload sent per round, and second-order optimization methods (e.g., incorporating Hessian information), which converge in fewer rounds and so require fewer exchanges overall. These advances are crucial for deploying FL in resource-constrained environments and for scaling it to larger models and datasets, with applications ranging from IoT to healthcare.
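As a concrete illustration of the compression side, the sketch below shows how a client might sparsify and quantize its model update before upload. This is a minimal example assuming NumPy; the function names, the top-k rule, and the parameters (e.g., k=100, 8-bit uniform quantization) are hypothetical choices for illustration, not the method of any particular paper or framework.

```python
# Minimal sketch: compress a client's model update with top-k sparsification
# followed by 8-bit uniform quantization. Names and parameters are illustrative.
import numpy as np

def top_k_sparsify(update: np.ndarray, k: int):
    """Keep only the k largest-magnitude entries; transmit (indices, values)."""
    flat = update.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the k largest entries
    return idx, flat[idx]

def quantize_8bit(values: np.ndarray):
    """Uniformly quantize values to int8 with a single per-tensor scale."""
    m = float(np.abs(values).max())
    scale = m / 127 if m > 0 else 1.0
    q = np.round(values / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Example: compress a simulated 10,000-parameter update.
rng = np.random.default_rng(0)
update = rng.normal(size=10_000).astype(np.float32)

idx, vals = top_k_sparsify(update, k=100)  # 100x fewer values sent
q, scale = quantize_8bit(vals)             # 4 bytes -> 1 byte per value

# Server side: reconstruct a (lossy) dense update from the compressed message.
recovered = np.zeros_like(update)
recovered[idx] = dequantize(q, scale)
```

In practice, schemes like this are often paired with error feedback, where each client accumulates the compression residual locally and adds it to the next round's update, which helps preserve convergence despite the lossy compression.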