Sparse Communication
Sparse communication in machine learning reduces the amount of data exchanged during model training and inference, improving efficiency and scalability, particularly in distributed and federated learning settings. Current research focuses on algorithms such as sparse gradient accumulation and on novel layer architectures (e.g., basis-projected layers) that minimize communication overhead while maintaining accuracy. These advances are crucial for deploying large models on resource-constrained devices and for accelerating training under limited bandwidth, with impact on fields ranging from federated learning to multi-agent systems and brain-computer interfaces.
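One common realization of sparse gradient accumulation is top-k sparsification with error feedback: each worker transmits only the largest-magnitude gradient entries and folds everything it withheld back into the next step, so no gradient mass is permanently lost. The sketch below is a minimal, generic illustration of that idea, not the method of any specific paper listed here; the function name, the k_frac parameter, and the PyTorch framing are assumptions made for the example.

```python
import torch

def sparsify_with_accumulation(grad: torch.Tensor,
                               residual: torch.Tensor,
                               k_frac: float = 0.01):
    """Top-k gradient sparsification with local error accumulation (sketch)."""
    # Fold previously unsent gradient mass back in (error feedback).
    accumulated = (grad + residual).flatten()

    # Keep only the k largest-magnitude entries; only these are communicated.
    k = max(1, int(k_frac * accumulated.numel()))
    _, idx = torch.topk(accumulated.abs(), k)
    values = accumulated[idx]  # signed values to transmit

    # Everything not transmitted stays in the local residual buffer.
    new_residual = accumulated.clone()
    new_residual[idx] = 0.0

    return idx, values, new_residual.view_as(grad)

# Hypothetical usage: each worker sends (idx, values) instead of the dense
# gradient, cutting per-step communication to roughly k_frac of the original.
residual = torch.zeros(1000)
grad = torch.randn(1000)
idx, values, residual = sparsify_with_accumulation(grad, residual, k_frac=0.01)
```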