Decentralized Machine Learning
Decentralized machine learning focuses on training models across distributed devices without centralizing data, prioritizing privacy and efficiency. Current research emphasizes robust algorithms for decentralized optimization, such as decentralized variants of stochastic gradient descent and Bayesian inference methods, together with efficient communication strategies for heterogeneous networks and non-identically distributed (non-IID) data. This approach is significant for enabling large-scale machine learning on resource-constrained devices and in privacy-sensitive applications, impacting fields like federated learning and agent-based modeling.
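To make the core idea concrete, below is a minimal sketch of decentralized stochastic gradient descent with gossip averaging: each node takes a local SGD step on its private data, then mixes parameters only with its neighbors via a doubly stochastic mixing matrix, with no central server. The node count, ring topology, and synthetic linear-regression task are assumptions chosen for illustration and are not taken from the papers listed here.

```python
# Minimal sketch of decentralized SGD with gossip averaging (illustrative only).
# The ring topology, node count, and regression objective are assumptions for
# this example, not a reproduction of any specific paper's method.
import numpy as np

rng = np.random.default_rng(0)

n_nodes, n_samples, dim = 4, 200, 5
w_true = rng.normal(size=dim)

# Each node holds its own local dataset; raw data never leaves the node.
local_data = []
for _ in range(n_nodes):
    X = rng.normal(size=(n_samples, dim))
    y = X @ w_true + 0.1 * rng.normal(size=n_samples)
    local_data.append((X, y))

# Doubly stochastic mixing matrix for a ring topology:
# each node averages with itself and its two neighbors.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

params = np.zeros((n_nodes, dim))  # one parameter vector per node
lr, batch, steps = 0.05, 16, 300

for _ in range(steps):
    # 1) Local stochastic gradient step on each node's private data.
    for i, (X, y) in enumerate(local_data):
        idx = rng.choice(n_samples, size=batch, replace=False)
        grad = X[idx].T @ (X[idx] @ params[i] - y[idx]) / batch
        params[i] -= lr * grad
    # 2) Gossip step: exchange and average parameters with neighbors only.
    params = W @ params

print("max distance to true weights:",
      np.max(np.linalg.norm(params - w_true, axis=1)))
```

Communication-efficient variants in the literature replace or compress the gossip step (e.g., sparsified or quantized parameter exchange), which is the kind of trade-off the papers below study.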
Papers
DIMAT: Decentralized Iterative Merging-And-Training for Deep Learning Models
Nastaran Saadati, Minh Pham, Nasla Saleem, Joshua R. Waite, Aditya Balu, Zhanhong Jiang, Chinmay Hegde, Soumik Sarkar
Bayesian Federated Model Compression for Communication and Computation Efficiency
Chengyu Xia, Danny H. K. Tsang, Vincent K. N. Lau