Decentralized FL
Decentralized federated learning (DFL) aims to collaboratively train machine learning models across multiple devices or agents without relying on a central server, enhancing data privacy and scalability. Current research focuses on challenges such as robustness to malicious actors (Byzantine attacks), data and model heterogeneity across participants, and communication efficiency, the latter addressed through techniques such as one-bit compressive sensing and optimized consensus algorithms. DFL's significance lies in its potential to enable large-scale collaborative learning in privacy-sensitive applications and resource-constrained environments, impacting diverse fields from healthcare and finance to robotics and IoT.
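The consensus step at the heart of many DFL algorithms can be illustrated with a short sketch: each agent takes a local gradient step on its own data, then averages its parameters with its graph neighbors through a doubly stochastic mixing matrix. The ring topology, quadratic local losses, and all variable names below are illustrative assumptions, not drawn from the papers listed here; this is a minimal gossip-averaging sketch, not any specific published method.

```python
import numpy as np

# Minimal sketch of one DFL training loop: a local gradient step per agent,
# followed by gossip averaging with graph neighbors (no central server).
# The setup (ring graph, quadratic losses, hyperparameters) is hypothetical.

rng = np.random.default_rng(0)
n_agents, dim = 4, 3

# Ring topology: each agent mixes with itself and its two neighbors.
# W is doubly stochastic (rows and columns sum to 1), so averaging
# preserves the network-wide mean of the parameters.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

# Heterogeneous local objectives: agent i minimizes ||x - t_i||^2 / 2,
# standing in for training on agent i's private data.
targets = rng.normal(size=(n_agents, dim))
models = rng.normal(size=(n_agents, dim))  # one parameter vector per agent
lr = 0.1

for _ in range(200):
    grads = models - targets      # gradient of each agent's local loss
    models = models - lr * grads  # local update on private data
    models = W @ models           # consensus: average with neighbors

# Without any server, the agents agree on a model near the minimizer of
# the average objective (here, the mean of the targets).
print("consensus model:", models.mean(axis=0))
print("mean target:   ", targets.mean(axis=0))
```

Because the mixing matrix is doubly stochastic, the network-wide average of the parameters evolves exactly as plain gradient descent on the average objective, which is why the agents reach agreement close to the global minimizer despite never communicating with a coordinator.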
Papers
Decentralized and Asymmetric Multi-Agent Learning in Construction Sites
Yakov Miron, Dan Navon, Yuval Goldfracht, Dotan Di Castro, Itzik Klein
Escaping Local Minima: Hybrid Artificial Potential Field with Wall-Follower for Decentralized Multi-Robot Navigation
Joonkyung Kim, Sangjin Park, Wonjong Lee, Woojun Kim, Nakju Doh, Changjoo Nam
Decentralized Sporadic Federated Learning: A Unified Algorithmic Framework with Convergence Guarantees
Shahryar Zehtabi, Dong-Jun Han, Rohit Parasnis, Seyyedali Hosseinalipour, Christopher G. Brinton
Decentralized Event-Triggered Online Learning for Safe Consensus of Multi-Agent Systems with Gaussian Process Regression
Xiaobing Dai, Zewen Yang, Mengtian Xu, Fangzhou Liu, Georges Hattab, Sandra Hirche