Federated Learning
Federated learning (FL) is a decentralized machine learning approach in which multiple devices collaboratively train a shared model without exposing their raw data, thereby preserving privacy. Current research focuses on challenges such as data heterogeneity (non-IID data), communication efficiency (e.g., using scalar updates or spiking neural networks), and robustness to adversarial attacks or concept drift, often employing techniques such as knowledge distillation, James-Stein estimators, and adaptive client selection. FL matters because it allows sophisticated models to be trained on massive, distributed datasets while complying with privacy regulations and ethical constraints, with applications spanning healthcare, IoT, and other sensitive domains. A minimal federated averaging sketch illustrating the basic training loop appears below.
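To make the "train locally, share only model updates" idea concrete, the following is a minimal sketch of a federated averaging (FedAvg) style round. It is an illustrative example, not taken from any of the papers listed below: the linear model, the synthetic non-IID clients, and functions such as local_update and fedavg_round are hypothetical names introduced here for clarity.

```python
import numpy as np

# Minimal FedAvg-style sketch (illustrative only): each client fits a linear
# model on its own private data; the server averages the resulting weights,
# weighted by each client's sample count. Raw data never leaves a client.

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few epochs of gradient descent on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """One communication round: clients train locally, server averages."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    # Weighted average of client models; only weights leave the devices.
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Hypothetical non-IID clients: each samples features from a shifted distribution.
true_w = np.array([2.0, -1.0])
clients = []
for shift in (-1.0, 0.0, 2.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ true_w + rng.normal(0, 0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print("estimated weights:", w)  # approaches true_w without pooling raw data
```

Many of the papers below can be read as refinements of this loop, e.g., changing what the clients send (communication efficiency), how the server aggregates (robustness to attacks), or how clients are selected and personalized (heterogeneity).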
Papers
Towards Resource-Efficient Federated Learning in Industrial IoT for Multivariate Time Series Analysis
Alexandros Gkillas, Aris Lalos
Cooperation and Personalization on a Seesaw: Choice-based FL for Safe Cooperation in Wireless Networks
Han Zhang, Medhat Elsayed, Majid Bavand, Raimundas Gaigalas, Yigit Ozcan, Melike Erol-Kantarci
Trustworthy Federated Learning: Privacy, Security, and Beyond
Chunlu Chen, Ji Liu, Haowen Tan, Xingjian Li, Kevin I-Kai Wang, Peng Li, Kouichi Sakurai, Dejing Dou
Analysis of regularized federated learning
Langming Liu, Dingxuan Zhou
Efficient and Robust Regularized Federated Recommendation
Langming Liu, Wanyu Wang, Xiangyu Zhao, Zijian Zhang, Chunxu Zhang, Shanru Lin, Yiqi Wang, Lixin Zou, Zitao Liu, Xuetao Wei, Hongzhi Yin, Qing Li
ProFL: Performative Robust Optimal Federated Learning
Xue Zheng, Tian Xie, Xuwei Tan, Aylin Yener, Xueru Zhang, Ali Payani, Myungjin Lee
Federated Transformer: Multi-Party Vertical Federated Learning on Practical Fuzzily Linked Data
Zhaomin Wu, Junyi Hou, Yiqun Diao, Bingsheng He
Securing Federated Learning Against Novel and Classic Backdoor Threats During Foundation Model Integration
Xiaohuan Bi, Xi Li