Federated Learning
Federated learning (FL) is a decentralized machine learning approach that enables collaborative model training across multiple devices or institutions without sharing raw data, thereby preserving privacy. Current research focuses on challenges such as data heterogeneity (non-IID data), communication efficiency (e.g., via scalar updates or spiking neural networks), and robustness to adversarial attacks or concept drift, often employing techniques such as knowledge distillation, James-Stein estimators, and adaptive client selection. FL's significance lies in enabling models to be trained on massive, distributed datasets while complying with privacy regulations and ethical constraints, with applications spanning healthcare, IoT, and other sensitive domains.
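The collaborative-training-without-data-sharing idea described above is typically realized through server-side aggregation of client model updates, as in Federated Averaging (FedAvg): each client trains locally and sends only its parameters and sample count, and the server combines them weighted by local dataset size. A minimal sketch, assuming plain Python lists as model parameter vectors and hypothetical client inputs (not code from any of the listed papers):

```python
# Minimal sketch of FedAvg-style server aggregation (illustrative only).
# Clients send parameter vectors plus local sample counts; the server
# averages them weighted by sample count and never sees raw data.

def fedavg_aggregate(client_params, client_sizes):
    """Return the sample-size-weighted average of client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]

# Hypothetical round: three clients with different data volumes.
params = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 30, 60]
global_model = fedavg_aggregate(params, sizes)
print(global_model)  # [4.0, 5.0]
```

Weighting by sample count keeps the aggregate unbiased toward clients with little data; much of the research above (e.g., on non-IID data or robust aggregation) modifies exactly this step.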
Papers
ProFL: Performative Robust Optimal Federated Learning
Xue Zheng, Tian Xie, Xuwei Tan, Aylin Yener, Xueru Zhang, Ali Payani, Myungjin Lee
Federated Transformer: Multi-Party Vertical Federated Learning on Practical Fuzzily Linked Data
Zhaomin Wu, Junyi Hou, Yiqun Diao, Bingsheng He
Securing Federated Learning Against Novel and Classic Backdoor Threats During Foundation Model Integration
Xiaohuan Bi, Xi Li