Optimal Communication
Research on optimal communication in distributed systems aims to minimize communication overhead while preserving performance in tasks such as distributed optimization and federated learning. Current work emphasizes algorithms that achieve near-optimal communication costs through techniques such as tailored weight selection in distributed optimization, memory deduplication in tensor parallelism (e.g., Rotated Tensor Parallelism), and efficient secure aggregation protocols (e.g., SwiftAgg+). These advances are crucial for scaling machine learning models to larger datasets and more complex architectures, with impact ranging from large-scale data analysis to privacy-preserving collaborative learning.
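To make the secure-aggregation idea concrete, below is a minimal, illustrative sketch of additive pairwise masking, the mechanism underlying SecAgg-style protocols that SwiftAgg+ improves on; it is not the SwiftAgg+ protocol itself, and the function names (make_pairwise_masks, mask_update, aggregate) are hypothetical. The point is that clients can hide individual updates behind masks that cancel in the server-side sum, so the server learns only the aggregate.

```python
# Toy sketch of additive-mask secure aggregation (SecAgg-style; NOT SwiftAgg+ itself).
# Each pair of clients shares a random mask; the lower-indexed client adds it,
# the higher-indexed client subtracts it, so all masks cancel in the sum.

import numpy as np


def make_pairwise_masks(num_clients: int, dim: int, seed: int = 0):
    """Generate one shared random mask per client pair (i < j)."""
    rng = np.random.default_rng(seed)
    return {(i, j): rng.standard_normal(dim)
            for i in range(num_clients) for j in range(i + 1, num_clients)}


def mask_update(client_id: int, update: np.ndarray, masks) -> np.ndarray:
    """Client side: add masks shared with higher-indexed peers, subtract the rest."""
    masked = update.copy()
    for (i, j), m in masks.items():
        if client_id == i:
            masked += m
        elif client_id == j:
            masked -= m
    return masked


def aggregate(masked_updates) -> np.ndarray:
    """Server side: summing the masked updates cancels every pairwise mask."""
    return np.sum(masked_updates, axis=0)


if __name__ == "__main__":
    num_clients, dim = 4, 8
    rng = np.random.default_rng(42)
    true_updates = [rng.standard_normal(dim) for _ in range(num_clients)]

    masks = make_pairwise_masks(num_clients, dim)
    masked = [mask_update(c, u, masks) for c, u in enumerate(true_updates)]

    # The recovered sum equals the plain sum, yet no single masked update
    # reveals the corresponding client's true update.
    assert np.allclose(aggregate(masked), np.sum(true_updates, axis=0))
    print("Aggregate recovered:", aggregate(masked))
```

Real protocols additionally handle client dropouts and replace the explicit mask table with shared randomness (e.g., key agreement), which is where the communication savings of schemes like SwiftAgg+ come from.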