Decentralized Training
Decentralized training aims to train machine learning models collaboratively across many devices or agents without centralizing the raw training data on a single server; fully decentralized variants also dispense with the central coordinator altogether. Current research emphasizes efficient algorithms such as federated learning and its variants (e.g., asynchronous and personalized federated learning), and addresses challenges including data heterogeneity, communication overhead, and robustness to adversarial attacks. The approach is significant because it enables large-scale model training with privacy preservation and supports deploying AI in resource-constrained or distributed environments, with applications ranging from mobile computing to large language model development.
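To make the core mechanic concrete, the sketch below shows one common instantiation, federated averaging (FedAvg): clients train locally on their own data and a coordinator averages the resulting weights. This is an illustrative toy on synthetic linear-regression data, not an implementation from any of the papers here; all names and hyperparameters (num_clients, local epochs, learning rate) are assumptions chosen for the example. The per-client input shift is included to hint at the data-heterogeneity challenge mentioned above.

```python
# Minimal FedAvg sketch on synthetic data (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

num_clients = 5
dim = 10
true_w = rng.normal(size=dim)

# Heterogeneous client data: each client draws inputs from a shifted
# distribution, a toy stand-in for non-IID data across devices.
clients = []
for c in range(num_clients):
    X = rng.normal(loc=c * 0.5, size=(100, dim))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    clients.append((X, y))

def local_update(w, X, y, lr=0.01, epochs=5):
    """Run a few epochs of full-batch gradient descent on one client's data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

w_global = np.zeros(dim)
for rnd in range(20):
    # Each client trains locally on private data ...
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    # ... and the coordinator averages the weights, weighted by dataset
    # size (equal here), without ever seeing the raw data.
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    w_global = np.average(local_ws, axis=0, weights=sizes)

mse = np.mean([np.mean((X @ w_global - y) ** 2) for X, y in clients])
print(f"after 20 rounds: mean client MSE = {mse:.4f}")
```

Asynchronous and personalized variants change the aggregation step: asynchronous schemes fold in client updates as they arrive rather than waiting for a full round, and personalized schemes keep part of each client's model local instead of averaging everything.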