Split Federated Learning

Split Federated Learning (SFL) is a distributed machine learning approach that partitions a model between clients and a server: each client trains only the initial layers on its local data and exchanges intermediate activations and gradients with the server, which trains the remaining layers. This reduces the computational burden on individual clients while keeping raw data on-device. Current research focuses on challenges such as asynchronous communication, adversarial attacks (especially on large language models), and data heterogeneity, through techniques including generative activation-aided updates, jamming-resilient frameworks, and adaptive model-splitting strategies. SFL's significance lies in enabling efficient, privacy-preserving training of large models on resource-constrained devices, with applications ranging from vehicular edge computing to continuous authentication and medical image analysis.

Papers