Federated Split Learning

Federated Split Learning (FSL) is a distributed machine learning paradigm that aims to improve the efficiency and privacy of training large models across multiple resource-constrained devices. The model is partitioned between clients and a server: each client trains only the layers up to a "cut layer" and exchanges intermediate activations and gradients with the server, which trains the remainder, so raw data never leaves the device. Current research focuses on optimizing model-splitting strategies for various architectures, including large language models and vision transformers, often incorporating techniques such as momentum alignment and auxiliary networks to reduce communication overhead and improve model accuracy. The approach is particularly relevant to applications such as personalized AI agents and resource-constrained IoT environments, offering a balance between data privacy and computational efficiency relative to traditional centralized or fully federated learning.
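The client/server exchange at the heart of split learning can be sketched in a few lines. The toy example below (all names and the two-layer linear model are illustrative assumptions, not from any specific FSL paper) shows the characteristic pattern: the client forwards its data through the layers it owns, sends only the cut-layer activations ("smashed data") to the server, and the server returns the gradient at the cut so the client can finish backpropagation without ever sharing raw data.

```python
import numpy as np

# Hypothetical minimal sketch of the split-learning exchange:
# the client holds the first layer, the server holds the rest.
rng = np.random.default_rng(0)

# Toy regression data that stays on the client.
X = rng.normal(size=(32, 8))
y = X @ rng.normal(size=(8, 1))

W1 = rng.normal(scale=0.1, size=(8, 4))  # client-side weights
W2 = rng.normal(scale=0.1, size=(4, 1))  # server-side weights
lr = 0.05
losses = []

for step in range(200):
    # Client: forward through its split; only h crosses the network.
    h = X @ W1                      # cut-layer activations ("smashed data")
    # Server: finish the forward pass and compute the loss.
    pred = h @ W2
    losses.append(float(np.mean((pred - y) ** 2)))
    # Server: backprop to the cut layer, return the activation gradient.
    g_pred = 2 * (pred - y) / len(y)
    g_h = g_pred @ W2.T             # gradient sent back to the client
    W2 -= lr * (h.T @ g_pred)
    # Client: finish backpropagation locally with the returned gradient.
    W1 -= lr * (X.T @ g_h)

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

In a full FSL system the two halves of this loop run on separate machines, the activation/gradient tensors are what incur the communication overhead the surveyed papers try to reduce, and a federated server additionally aggregates the client-side weights across participants.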

Papers