Decentralized Training
Decentralized training is the collaborative training of machine learning models across multiple devices or agents without routing every update through a central server, with the twin goals of preserving data privacy and scaling beyond a single machine. Current research emphasizes communication-efficient methods such as federated learning and its variants (e.g., asynchronous and personalized federated learning), and tackles challenges including data heterogeneity across participants, communication overhead, and robustness to adversarial attacks. The approach matters because it enables large-scale model training with privacy preservation and makes AI deployable in resource-constrained or distributed environments, with applications ranging from mobile computing to large language model development.
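At its core, this family of methods alternates local gradient steps on private data with a model-averaging step among participants, so a small sketch can make the loop concrete. The snippet below is a minimal illustration of decentralized SGD with gossip averaging over a ring of peers, standing in for the server-free setting described above; the synthetic task, ring topology, and all hyperparameters are illustrative assumptions, not drawn from any particular paper.

```python
import numpy as np

# Minimal sketch: decentralized SGD with gossip (neighbor) averaging on a
# synthetic linear-regression task. Everything here is illustrative.

rng = np.random.default_rng(0)
n_nodes, n_features, n_local = 8, 10, 50
w_true = rng.normal(size=n_features)

# Each node holds a private data shard; a per-node mean shift makes the
# shards non-IID, mimicking the data heterogeneity the literature highlights.
data = []
for i in range(n_nodes):
    X = rng.normal(loc=0.5 * i / n_nodes, size=(n_local, n_features))
    y = X @ w_true + 0.1 * rng.normal(size=n_local)
    data.append((X, y))

w = np.zeros((n_nodes, n_features))  # one model copy per node
lr, rounds, local_steps = 0.05, 100, 5

for _ in range(rounds):
    # Local phase: each node takes a few SGD steps on its own shard only;
    # raw data never leaves the node.
    for i, (X, y) in enumerate(data):
        for _ in range(local_steps):
            grad = 2 * X.T @ (X @ w[i] - y) / n_local
            w[i] -= lr * grad
    # Gossip phase: each node averages its model with its two ring
    # neighbors, replacing the central server of classical federated learning.
    w = (w + np.roll(w, 1, axis=0) + np.roll(w, -1, axis=0)) / 3

print("max node-to-truth parameter error:", np.abs(w - w_true).max())
```

Replacing the gossip phase with a single average over all nodes would recover the server-coordinated FedAvg pattern; in this decentralized form, the communication overhead and heterogeneity challenges noted above correspond directly to the sparsity of the mixing topology and the per-node data shift.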