Distributed Learning
Distributed learning aims to train machine learning models across multiple devices, improving efficiency and scalability for large datasets and complex models. Current research focuses on mitigating challenges like communication overhead (through gradient compression and efficient synchronization architectures like Ring-AllReduce), handling node failures (using dynamic weighting strategies), and ensuring robustness against adversarial attacks or faulty nodes. This field is crucial for advancing deep learning applications in resource-constrained environments (e.g., IoT, mobile computing) and for enabling collaborative learning across distributed data sources while addressing privacy concerns.
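To make the Ring-AllReduce synchronization pattern mentioned above concrete, below is a minimal single-process sketch that simulates the algorithm's two phases (reduce-scatter, then all-gather) over in-memory NumPy arrays. The worker count, chunking, and final averaging step are illustrative assumptions, not any particular framework's API.

```python
# A minimal sketch of Ring-AllReduce gradient averaging, assuming N simulated
# workers that each hold a gradient vector of equal length. Real systems send
# chunks over the network; here the "sends" are plain array copies.
import numpy as np

def ring_allreduce(grads):
    """Average per-worker gradient vectors via reduce-scatter + all-gather."""
    n = len(grads)
    # Each worker splits its gradient into n contiguous chunks.
    chunks = [np.array_split(g.astype(float), n) for g in grads]

    # Reduce-scatter: after n-1 steps, worker i holds the full sum of
    # chunk (i + 1) mod n.
    for step in range(n - 1):
        # Snapshot outgoing chunks so every send in a step uses pre-step values,
        # mimicking the synchronous ring exchange.
        outgoing = [(i, (i - step) % n, chunks[i][(i - step) % n].copy())
                    for i in range(n)]
        for src, idx, payload in outgoing:
            chunks[(src + 1) % n][idx] += payload

    # All-gather: circulate the completed chunks so every worker ends up
    # holding the full reduced gradient.
    for step in range(n - 1):
        outgoing = [(i, (i + 1 - step) % n, chunks[i][(i + 1 - step) % n].copy())
                    for i in range(n)]
        for src, idx, payload in outgoing:
            chunks[(src + 1) % n][idx] = payload

    # Concatenate each worker's chunks and divide by n to get the average.
    return [np.concatenate(c) / n for c in chunks]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    workers = [rng.normal(size=12) for _ in range(4)]   # 4 hypothetical workers
    reduced = ring_allreduce(workers)
    expected = np.mean(workers, axis=0)
    assert all(np.allclose(r, expected) for r in reduced)
    print("every worker holds the averaged gradient")
```

The appeal of this pattern is that each worker sends and receives only its share of the gradient per step, so per-worker bandwidth stays roughly constant as the number of workers grows, which is why it is a common baseline for reducing communication overhead in distributed training.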