Gradient Communication
Gradient communication in distributed machine learning aims to optimize the exchange of model updates between computing nodes, reducing the communication bottleneck that limits scalability. Current research focuses on efficient compression techniques, such as one-bit quantization and sparsification, and on novel aggregation schemes like Bernoulli aggregation, often within federated learning and parameter-server architectures. By cutting communication volume, these advances improve training speed and efficiency for large-scale models and lower computational cost and energy consumption, with applications ranging from remote sensing image interpretation to general deep learning tasks.
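As a rough illustration of the compression ideas mentioned above (the specific algorithms in the referenced papers may differ), the sketch below shows two common primitives: top-k sparsification, which transmits only the largest-magnitude gradient entries and keeps the rest as a residual for error feedback, and one-bit sign quantization, which transmits one sign per entry plus a single scale. Function names and the `k_ratio` parameter are illustrative assumptions, not taken from any particular paper.

```python
import numpy as np

def topk_sparsify(grad, k_ratio=0.01):
    """Keep the largest-magnitude k% of gradient entries; zero the rest.

    Returns the indices and values that would be transmitted, plus the
    residual (dropped entries) that error-feedback schemes add back into
    the next step's gradient. Names and defaults are illustrative.
    """
    flat = grad.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of top-k entries
    values = flat[idx]
    residual = flat.copy()
    residual[idx] = 0.0                            # entries not sent this round
    return idx, values, residual.reshape(grad.shape)

def onebit_quantize(grad):
    """One-bit (sign) quantization: send signs plus one scale per tensor."""
    scale = np.mean(np.abs(grad))                  # single float to transmit
    signs = np.sign(grad).astype(np.int8)          # 1 bit per entry in principle
    return scale, signs

def onebit_dequantize(scale, signs):
    """Reconstruct an approximate gradient from the transmitted signs."""
    return scale * signs.astype(np.float32)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g = rng.normal(size=(4, 8)).astype(np.float32)

    idx, vals, residual = topk_sparsify(g, k_ratio=0.1)
    scale, signs = onebit_quantize(g)
    g_hat = onebit_dequantize(scale, signs)

    print("sent", vals.size, "of", g.size, "entries (top-k)")
    print("sign-quantization error:", np.linalg.norm(g - g_hat))
```

In a parameter-server or federated setting, each worker would send only the compressed representation (indices and values, or scale and signs), and the server would aggregate the decompressed updates; error feedback on the residual is the usual way to keep the compression bias from accumulating.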