Decentralized Deep Learning
Decentralized deep learning trains deep neural networks across multiple devices or nodes without a central server, aiming to improve efficiency, privacy, and robustness. Current research targets challenges such as heterogeneous (non-IID) data distributions and high communication costs through algorithms like sharpness-aware minimization, iterative merging-and-training, and adaptive consensus step-size adjustment, often combined with momentum and communication compression. The field is significant because it enables large-scale, privacy-preserving training and supports resource-constrained settings such as edge computing and federated learning.
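To make the core mechanism concrete, here is a minimal sketch of decentralized SGD with a gossip (consensus) step, the building block behind most of the methods above. Everything in it is an illustrative assumption rather than any specific paper's algorithm: a ring topology with a doubly stochastic mixing matrix W, quadratic per-node toy losses standing in for non-IID data, and a fixed consensus_step that adaptive schemes would instead tune per round.

```python
import numpy as np

# Sketch of decentralized SGD: each node takes a local gradient step,
# then averages parameters with its neighbors only (no central server).
rng = np.random.default_rng(0)

n_nodes, dim, steps, lr = 4, 10, 200, 0.1
consensus_step = 0.5  # fixed here; adaptive consensus methods adjust this per round

# Ring topology: each node mixes with its two neighbors.
# W is doubly stochastic, so gossip preserves the network-wide average.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

# Heterogeneous local objectives: node i minimizes ||x - t_i||^2 / 2,
# mimicking non-IID data; the global optimum is the mean of the targets.
targets = rng.normal(size=(n_nodes, dim))
x = rng.normal(size=(n_nodes, dim))  # one parameter vector per node

for _ in range(steps):
    grads = x - targets  # local gradients, computed independently per node
    x = x - lr * grads   # local SGD step
    # Consensus step: blend each node's parameters with its neighbors'.
    x = (1 - consensus_step) * x + consensus_step * (W @ x)

print("max disagreement across nodes:", np.abs(x - x.mean(axis=0)).max())
print("distance to global optimum:", np.linalg.norm(x.mean(axis=0) - targets.mean(axis=0)))
```

The doubly stochastic choice of W is what lets purely local mixing drive all nodes toward the network average; communication compression and momentum variants modify the consensus step and the gradient step, respectively, without changing this overall structure.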