Parallel Split Learning
Parallel split learning (PSL) is a distributed deep learning approach designed to train large neural networks on resource-constrained devices by splitting the model between multiple clients and a central server: each client computes the layers up to a cut point on its local data in parallel, while the server handles the remaining layers. Current research focuses on improving PSL's efficiency through better resource allocation strategies, novel sampling techniques that mitigate data heterogeneity and straggler effects, and alternative model architectures such as U-shaped networks that enhance privacy. These advances aim to reduce training time and improve accuracy, making PSL a practical option for federated learning scenarios with many resource-limited devices, such as IoT applications.
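To make the client/server split concrete, the following is a minimal PyTorch sketch of one PSL training round. The model architecture, cut point, shapes, and toy data are illustrative assumptions, not the setup of any particular paper; a real deployment would exchange activations and gradients over the network rather than in memory.

```python
# Minimal sketch of one parallel split learning (PSL) round with an MLP split
# at a "cut layer". All names, shapes, and data below are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
NUM_CLIENTS, BATCH, IN_DIM, HIDDEN, CLASSES = 3, 8, 16, 32, 4

# Client-side sub-model (layers before the cut) -- one copy per client.
client_models = [nn.Sequential(nn.Linear(IN_DIM, HIDDEN), nn.ReLU())
                 for _ in range(NUM_CLIENTS)]
client_opts = [torch.optim.SGD(m.parameters(), lr=0.1) for m in client_models]

# Server-side sub-model (layers after the cut) -- shared across clients.
server_model = nn.Sequential(nn.Linear(HIDDEN, CLASSES))
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Toy local datasets, one batch per client.
data = [(torch.randn(BATCH, IN_DIM), torch.randint(0, CLASSES, (BATCH,)))
        for _ in range(NUM_CLIENTS)]

# 1) Clients run their forward passes (in parallel) up to the cut layer and
#    send the resulting activations ("smashed data") to the server.
smashed = []
for model, (x, _) in zip(client_models, data):
    a = model(x)
    smashed.append(a.detach().requires_grad_())  # detach mimics network transfer

# 2) The server completes the forward pass for every client, computes the loss,
#    and backpropagates through its own layers (gradients accumulate per client).
server_opt.zero_grad()
for a, (_, y) in zip(smashed, data):
    loss = loss_fn(server_model(a), y)
    loss.backward()  # fills grads of server_model and of each activation `a`
server_opt.step()

# 3) The server returns the activation gradients; each client backpropagates
#    them through its local layers and updates its own parameters.
for model, opt, a_server, (x, _) in zip(client_models, client_opts, smashed, data):
    opt.zero_grad()
    a_client = model(x)               # recompute (or cache) the local forward pass
    a_client.backward(a_server.grad)  # inject the gradient received from the server
    opt.step()
```

Unlike sequential split learning, where clients take turns interacting with the server, the client-side forward and backward passes above are independent of one another and can run concurrently, which is the source of PSL's speedup.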