Federated Learning
Federated learning (FL) is a decentralized machine learning approach in which multiple devices collaboratively train a shared model without exchanging their raw data, thereby preserving privacy. Current research addresses challenges such as data heterogeneity (non-IID data), communication efficiency (e.g., via scalar updates or spiking neural networks), and robustness to adversarial attacks and concept drift, often using techniques such as knowledge distillation, James-Stein estimators, and adaptive client selection. FL's significance lies in enabling training on massive, distributed datasets while complying with privacy regulations and ethical constraints, with applications spanning healthcare, IoT, and other sensitive domains.
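The server-side aggregation at the heart of most FL systems can be illustrated with Federated Averaging (FedAvg): each client trains locally, and the server combines the resulting parameters weighted by local dataset size. The sketch below is illustrative only (the function name and data are made up, not taken from any paper listed here):

```python
# Minimal FedAvg sketch: average client parameter vectors, weighted by
# how many local samples each client trained on. No raw data leaves a
# client; only model parameters are communicated.

def fedavg(client_weights, client_sizes):
    """Return the size-weighted average of client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_weights = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_weights[i] += w[i] * (n / total)
    return global_weights

# Example: two clients with unequal data volumes (a simple non-IID setting).
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [10, 30]  # the second client holds 3x as much data
print(fedavg(clients, sizes))  # -> [2.5, 3.5]
```

In practice the vectors would be full model weight tensors, and the averaging would run once per communication round; weighting by sample count is what makes FedAvg behave sensibly when client datasets differ in size.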
Papers
FLAME: Adaptive and Reactive Concept Drift Mitigation for Federated Learning Deployments
Ioannis Mavromatis, Stefano De Feo, Aftab Khan
A Federated Learning Platform as a Service for Advancing Stroke Management in European Clinical Centers
Diogo Reis Santos, Albert Sund Aillet, Antonio Boiano, Usevalad Milasheuski, Lorenzo Giusti, Marco Di Gennaro, Sanaz Kianoush, Luca Barbieri, Monica Nicoli, Michele Carminati, Alessandro E. C. Redondi, Stefano Savazzi, Luigi Serio
Comments on "Privacy-Enhanced Federated Learning Against Poisoning Adversaries"
Thomas Schneider, Ajith Suresh, Hossein Yalame
Leveraging Pre-trained Models for Robust Federated Learning for Kidney Stone Type Recognition
Ivan Reyes-Amezcua, Michael Rojas-Ruiz, Gilberto Ochoa-Ruiz, Andres Mendez-Vazquez, Christian Daul