Gradient Compression
Gradient compression aims to reduce the communication overhead in distributed machine learning by transmitting smaller representations of model updates (gradients). Current research focuses on developing novel compression techniques, including quantization, sparsification, low-rank approximation, and the use of large language models as gradient priors, often incorporating error feedback mechanisms to mitigate information loss. These advancements are crucial for scaling up training of large models like LLMs and for enabling efficient federated learning in resource-constrained environments, ultimately accelerating training speed and reducing energy consumption.
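To make the core idea concrete, below is a minimal sketch of one common approach, top-k gradient sparsification combined with error feedback. It is illustrative only: the class and function names (`TopKCompressor`, `compress`, `decompress`) are assumptions for this example and do not come from any particular paper or library.

```python
# Minimal sketch: top-k gradient sparsification with error feedback (NumPy).
# Only the k largest-magnitude gradient entries are transmitted; the dropped
# entries are accumulated locally and re-added before the next step, so the
# discarded information is eventually communicated rather than lost.
import numpy as np


class TopKCompressor:
    def __init__(self, k: int, dim: int):
        self.k = k
        self.residual = np.zeros(dim)  # error-feedback memory of dropped mass

    def compress(self, grad: np.ndarray):
        corrected = grad + self.residual                 # add back previous error
        idx = np.argpartition(np.abs(corrected), -self.k)[-self.k:]
        values = corrected[idx]                          # entries to transmit
        self.residual = corrected.copy()                 # remember what was dropped
        self.residual[idx] = 0.0
        return idx, values                               # sparse representation

    @staticmethod
    def decompress(idx: np.ndarray, values: np.ndarray, dim: int) -> np.ndarray:
        dense = np.zeros(dim)
        dense[idx] = values
        return dense


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, k = 1000, 10                                    # transmit only 1% of entries
    compressor = TopKCompressor(k, dim)
    grad = rng.normal(size=dim)
    idx, vals = compressor.compress(grad)
    approx = TopKCompressor.decompress(idx, vals, dim)
    print("relative error:", np.linalg.norm(grad - approx) / np.linalg.norm(grad))
```

In a distributed setting, each worker would send only `(idx, values)` to the server, cutting communication to roughly k/dim of the dense gradient; the residual buffer is what the summary above refers to as an error feedback mechanism.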