Split Federated Learning
Split Federated Learning (SFL) is a distributed machine learning approach that partitions a model between clients and a server so they can collaboratively train it while reducing the computational burden on each client and preserving data privacy. Current research focuses on challenges such as asynchronous communication, adversarial attacks (especially on large language models), and data heterogeneity, using techniques including generative activation-aided updates, jamming-resilient frameworks, and adaptive model-splitting strategies. SFL's significance lies in its potential to enable efficient, privacy-preserving training of large models on resource-constrained devices, with applications ranging from vehicular edge computing to continuous authentication and medical image analysis.
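To make the split concrete, below is a minimal sketch of one training step in a split setup, assuming a PyTorch-style pipeline. The layer shapes, cut-layer placement, and the label-sharing variant shown (where the client sends labels along with activations) are illustrative assumptions, not the method of any specific paper listed here.

```python
# Minimal sketch of one split-training step (illustrative, not from any
# specific SFL paper). Assumes PyTorch; shapes and cut point are arbitrary.
import torch
import torch.nn as nn

# Client-side portion: layers up to the "cut layer".
client_model = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
# Server-side portion: remaining layers plus the classifier head.
server_model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

client_opt = torch.optim.SGD(client_model.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One step on a dummy batch; raw inputs never leave the client.
x = torch.randn(8, 32)          # client's private inputs
y = torch.randint(0, 10, (8,))  # labels (shared with the server here)

# Client forward pass produces "smashed data" (cut-layer activations),
# which is what gets transmitted to the server.
activations = client_model(x)
smashed = activations.detach().requires_grad_()  # server's own leaf tensor

# Server completes the forward pass and computes the loss.
logits = server_model(smashed)
loss = loss_fn(logits, y)

# Server backward pass; the gradient w.r.t. the smashed data is what
# would be sent back to the client.
server_opt.zero_grad()
loss.backward()
server_opt.step()

# Client resumes backpropagation from the returned activation gradient.
client_opt.zero_grad()
activations.backward(smashed.grad)
client_opt.step()
```

In full SFL, after such local steps the client-side sub-models from many clients would additionally be aggregated (e.g., via FedAvg-style averaging), which is what distinguishes it from plain split learning.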
Papers
MergeSFL: Split Federated Learning with Feature Merging and Batch Size Regulation
Yunming Liao, Yang Xu, Hongli Xu, Lun Wang, Zhiwei Yao, Chunming Qiao
Have Your Cake and Eat It Too: Toward Efficient and Accurate Split Federated Learning
Dengke Yan, Ming Hu, Zeke Xia, Yanxin Yang, Jun Xia, Xiaofei Xie, Mingsong Chen