Distributed Learning
Distributed learning aims to train machine learning models across multiple devices, improving efficiency and scalability for large datasets and complex models. Current research focuses on reducing communication overhead (through gradient compression and efficient synchronization architectures such as Ring-AllReduce), handling node failures (for example with dynamic weighting strategies), and ensuring robustness against adversarial attacks or faulty nodes. This field is crucial for advancing deep learning applications in resource-constrained environments (e.g., IoT, mobile computing) and for enabling collaborative learning across distributed data sources while addressing privacy concerns.
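To make the Ring-AllReduce synchronization pattern mentioned above concrete, the following is a minimal single-process sketch that simulates its two phases (reduce-scatter, then all-gather) with NumPy arrays standing in for per-worker gradients. The function name `ring_allreduce` and the simulation setup are illustrative assumptions, not the API of any particular framework; in practice the sends and receives happen concurrently over a network.

```python
import numpy as np

def ring_allreduce(grads):
    """Simulate Ring-AllReduce over a list of per-worker gradient vectors.

    Each gradient is split into N chunks (N = number of workers).
    Phase 1 (reduce-scatter): in N-1 steps, each worker passes one chunk to its
    right neighbour and accumulates the chunk received from its left neighbour,
    so every worker ends up owning one fully summed chunk.
    Phase 2 (all-gather): in N-1 further steps, the summed chunks circulate
    until every worker holds the complete summed gradient.
    """
    n = len(grads)
    chunks = [np.array_split(g.astype(float), n) for g in grads]

    # Phase 1: reduce-scatter.
    for step in range(n - 1):
        # Capture payloads first, so each step's sends use pre-step values,
        # mimicking concurrent send/receive on all workers.
        sends = [(rank, (rank - step) % n, chunks[rank][(rank - step) % n].copy())
                 for rank in range(n)]
        for rank, idx, payload in sends:
            chunks[(rank + 1) % n][idx] += payload   # right neighbour accumulates

    # After reduce-scatter, worker r owns the fully summed chunk (r + 1) % n.
    # Phase 2: all-gather.
    for step in range(n - 1):
        sends = [(rank, (rank + 1 - step) % n, chunks[rank][(rank + 1 - step) % n].copy())
                 for rank in range(n)]
        for rank, idx, payload in sends:
            chunks[(rank + 1) % n][idx] = payload    # right neighbour overwrites

    # Every worker now holds the same summed gradient.
    return [np.concatenate(c) for c in chunks]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    per_worker = [rng.standard_normal(8) for _ in range(4)]
    reduced = ring_allreduce(per_worker)
    assert np.allclose(reduced[0], sum(per_worker))  # matches the naive sum
    print(reduced[0])
```

The appeal of this pattern is that each worker exchanges only about 2x the gradient size in total, independent of the number of workers, which is why it underpins bandwidth-efficient synchronization in many distributed training systems.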