Decentralized FL
Decentralized federated learning (DFL) aims to collaboratively train machine learning models across multiple devices or agents without relying on a central server, enhancing data privacy and scalability. Current research addresses challenges such as robustness to malicious actors (Byzantine attacks), data and model heterogeneity across participating entities, and communication efficiency, using techniques such as one-bit compressive sensing and optimized consensus algorithms. DFL's significance lies in its potential to enable large-scale collaborative learning in privacy-sensitive applications and resource-constrained environments, impacting diverse fields from healthcare and finance to robotics and IoT.
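To make the serverless setup concrete, the following is a minimal toy sketch (not taken from any of the papers below) of decentralized training with gossip-style consensus averaging: each node runs a local gradient step on its private data, then averages its parameters with its ring neighbors instead of uploading updates to a central coordinator. The data, topology, and learning rate are all illustrative assumptions.

```python
# Toy decentralized FL: local SGD + gossip averaging on a ring topology.
# Hypothetical example for illustration; nodes, data, and hyperparameters
# are assumptions, not taken from the cited papers.
import random

def local_grad(w, data):
    # Gradient of mean squared error for the model y = w * x
    # on this node's private dataset.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def train_decentralized(datasets, rounds=200, lr=0.05):
    n = len(datasets)
    w = [0.0] * n  # one scalar parameter per node
    for _ in range(rounds):
        # 1) Local step: each node updates on its own data only.
        w = [wi - lr * local_grad(wi, d) for wi, d in zip(w, datasets)]
        # 2) Gossip step: average with left/right ring neighbors
        #    (a simple doubly stochastic mixing, no central server).
        w = [(w[(i - 1) % n] + w[i] + w[(i + 1) % n]) / 3
             for i in range(n)]
    return w

random.seed(0)
# Each of 5 nodes observes noisy samples of the same relation y = 3x.
datasets = [[(x, 3 * x + random.gauss(0, 0.1)) for x in (1.0, 2.0, 3.0)]
            for _ in range(5)]
weights = train_decentralized(datasets)
spread = max(weights) - min(weights)  # consensus gap across nodes
```

After enough rounds, the per-node parameters converge both to each other (the `spread` shrinks toward zero) and to the shared underlying model, illustrating how consensus averaging replaces the central aggregation step of standard federated learning.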
Papers
RLSS: Real-time, Decentralized, Cooperative, Networkless Multi-Robot Trajectory Planning using Linear Spatial Separations
Baskın Şenbaşlar, Wolfgang Hönig, Nora Ayanian
Towards Computationally Efficient Responsibility Attribution in Decentralized Partially Observable MDPs
Stelios Triantafyllou, Goran Radanovic