Distributed Learning
Distributed learning aims to train machine learning models across multiple devices, improving efficiency and scalability for large datasets and complex models. Current research focuses on mitigating challenges like communication overhead (through gradient compression and efficient synchronization architectures like Ring-AllReduce), handling node failures (using dynamic weighting strategies), and ensuring robustness against adversarial attacks or faulty nodes. This field is crucial for advancing deep learning applications in resource-constrained environments (e.g., IoT, mobile computing) and for enabling collaborative learning across distributed data sources while addressing privacy concerns.
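As a rough illustration of the Ring-AllReduce synchronization pattern and gradient compression mentioned above, the sketch below simulates the reduce-scatter and all-gather phases of a ring all-reduce over in-memory NumPy arrays and shows a simple top-k sparsifier. The function names (ring_allreduce, top_k_sparsify) and the single-process simulation are illustrative assumptions, not taken from any specific system or paper.

```python
import numpy as np

def ring_allreduce(worker_grads):
    """Simulate Ring-AllReduce: every worker ends up with the element-wise
    sum of all workers' gradients while exchanging only one gradient chunk
    with its ring neighbour per step (bandwidth-optimal for large tensors)."""
    n = len(worker_grads)
    # Each worker splits its gradient into n contiguous chunks.
    chunks = [list(np.array_split(np.asarray(g, dtype=np.float64), n))
              for g in worker_grads]

    # Reduce-scatter: after n-1 steps, worker i holds the fully summed
    # chunk (i + 1) % n.
    for step in range(n - 1):
        sent = [chunks[i][(i - step) % n].copy() for i in range(n)]  # snapshot of what each worker sends
        for i in range(n):
            src = (i - 1) % n                      # ring neighbour we receive from
            chunks[i][(src - step) % n] += sent[src]

    # All-gather: circulate the reduced chunks so every worker has all of them.
    for step in range(n - 1):
        sent = [chunks[i][(i + 1 - step) % n].copy() for i in range(n)]
        for i in range(n):
            src = (i - 1) % n
            chunks[i][(src + 1 - step) % n] = sent[src]

    return [np.concatenate(c) for c in chunks]

def top_k_sparsify(grad, k):
    """Gradient compression by top-k sparsification: transmit only the k
    largest-magnitude entries and their indices (in practice the dropped
    residual is usually accumulated locally and sent in later rounds)."""
    idx = np.argsort(np.abs(grad))[-k:]
    return idx, grad[idx]

# Example: 4 simulated workers, each with a local gradient of length 10.
grads = [np.random.randn(10) for _ in range(4)]
reduced = ring_allreduce(grads)
assert np.allclose(reduced[0], np.sum(grads, axis=0))

idx, vals = top_k_sparsify(grads[0], k=3)  # send 3 of 10 entries
```

The ring layout keeps per-worker communication volume roughly constant as the number of workers grows, which is why it is preferred over naive parameter-server broadcast for large dense gradients; top-k compression trades a small accuracy/convergence cost for a further reduction in transmitted bytes.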