Communication Compression
Communication compression aims to reduce the bandwidth demands of distributed machine learning, a critical bottleneck in large-scale applications such as federated learning. Current research focuses on algorithms that apply compression techniques (e.g., quantization, sparsification) to the gradients or model updates being exchanged while preserving model accuracy, often combining them with strategies such as error feedback and adaptive compression levels within decentralized and federated learning frameworks. These advances enable efficient training of complex models on resource-constrained devices and across geographically dispersed datasets, improving both the scalability of machine learning and its applicability in bandwidth-limited environments.
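As a rough illustration of the ingredients mentioned above, the sketch below combines top-k sparsification with a local error-feedback buffer. It is a minimal example, not the method of any particular paper; the function and class names are hypothetical.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of a gradient vector."""
    compressed = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    compressed[idx] = grad[idx]
    return compressed

class ErrorFeedbackCompressor:
    """Accumulate the compression error locally and add it back before the
    next round, so dropped gradient information is not permanently lost
    (a standard error-feedback scheme)."""
    def __init__(self, dim, k):
        self.residual = np.zeros(dim)
        self.k = k

    def compress(self, grad):
        corrected = grad + self.residual           # re-inject last round's dropped mass
        compressed = topk_sparsify(corrected, self.k)
        self.residual = corrected - compressed     # remember what was dropped this round
        return compressed
```

In a distributed setting, each worker would keep its own compressor instance and transmit only the sparse output (indices and values) to the server or its neighbors, which is where the bandwidth savings come from.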