Decentralized Training

Decentralized training refers to collaboratively training machine learning models across multiple devices or agents without centralizing the raw data (and, in fully decentralized settings, without any coordinating server), prioritizing data privacy and scalability. Current research emphasizes communication-efficient algorithms such as federated learning and its variants (e.g., asynchronous and personalized federated learning), and addresses challenges such as data heterogeneity, communication overhead, and robustness to adversarial attacks. This approach is significant for enabling large-scale model training with privacy preservation and for deploying AI in resource-constrained or distributed environments, impacting fields ranging from mobile computing to large language model development.
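
To make the setting concrete, here is a minimal, illustrative sketch (not from any specific paper) of fully decentralized training via gossip averaging: each node runs a local gradient step on its private data shard, then averages its parameters with its ring neighbors, so no central server ever holds the data or the global model. All names, the ring topology, and the toy linear-regression task are assumptions chosen for brevity.

```python
import numpy as np

# Illustrative sketch: decentralized SGD with gossip averaging on a ring.
# Each node trains on its own private shard, then mixes parameters with
# its two ring neighbors -- there is no central parameter server.

rng = np.random.default_rng(0)
NUM_NODES, DIM, ROUNDS, LR = 4, 3, 200, 0.1
true_w = np.array([2.0, -1.0, 0.5])  # ground-truth weights for the toy task

# Each node holds a private data shard (data heterogeneity in miniature).
local_data = []
for _ in range(NUM_NODES):
    X = rng.normal(size=(50, DIM))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    local_data.append((X, y))

# Every node starts from its own random model.
models = [rng.normal(size=DIM) for _ in range(NUM_NODES)]

for _ in range(ROUNDS):
    # 1) Local step: each node takes one gradient step on its own shard.
    for i, (X, y) in enumerate(local_data):
        grad = 2 * X.T @ (X @ models[i] - y) / len(y)
        models[i] = models[i] - LR * grad
    # 2) Gossip step: average with ring neighbors (uniform 1/3 mixing weights).
    models = [
        (models[(i - 1) % NUM_NODES] + models[i] + models[(i + 1) % NUM_NODES]) / 3
        for i in range(NUM_NODES)
    ]

# The nodes reach consensus near the ground-truth weights.
for w in models:
    assert np.allclose(w, true_w, atol=0.05)
```

The gossip step is where communication overhead arises in practice: variants studied in the literature compress, sparsify, or schedule these neighbor exchanges asynchronously rather than averaging dense parameters every round.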

Papers