Paper ID: 2407.15738

Parallel Split Learning with Global Sampling

Mohammad Kohankhaki, Ahmad Ayad, Mahdi Barhoush, Anke Schmeink

The expansion of IoT devices and the demands of deep learning have highlighted significant challenges in distributed deep learning systems. Parallel split learning has emerged as a promising derivative of split learning, well suited for distributed learning on resource-constrained devices. However, parallel split learning faces several challenges, such as large effective batch sizes, non-independent and identically distributed (non-IID) data, and the straggler effect. We view these issues as a sampling dilemma and propose to address them by orchestrating the mini-batch sampling process on the server side. We introduce a new method called uniform global sampling to decouple the effective batch size from the number of clients and reduce the mini-batch deviation. To address the straggler effect, we introduce a novel method called Latent Dirichlet Sampling, which generalizes uniform global sampling to balance the trade-off between batch deviation and training time. Our simulations reveal that our proposed methods enhance model accuracy by up to 34.1% in non-IID settings and reduce the training time in the presence of stragglers by up to 62%. In particular, Latent Dirichlet Sampling effectively mitigates the straggler effect without compromising model accuracy or adding significant computational overhead compared to uniform global sampling. Our results demonstrate the potential of our methods to mitigate common challenges in parallel split learning.
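
To illustrate the core idea of server-side mini-batch orchestration, the sketch below shows one plausible reading of uniform global sampling as described in the abstract: the server draws a single mini-batch of fixed size uniformly over the union of all clients' data and tells each client which of its local samples to forward, so the effective batch size no longer grows with the number of clients. This is only an illustrative sketch under that assumption, not the authors' implementation; the function name, interface, and the uniform-without-replacement draw are all hypothetical.

import numpy as np

def uniform_global_sample(client_sizes, batch_size, rng=None):
    # Illustrative server-side sampler (assumption, not the paper's code):
    # draw one global mini-batch of `batch_size` indices uniformly over the
    # union of all clients' samples, then return, per client, the local
    # indices that client should forward. client_sizes[k] is the number of
    # samples held by client k.
    rng = np.random.default_rng() if rng is None else rng
    offsets = np.concatenate(([0], np.cumsum(client_sizes)))
    total = int(offsets[-1])
    # Uniform draw over the virtual global dataset, without replacement.
    global_idx = rng.choice(total, size=batch_size, replace=False)
    per_client = {}
    for k in range(len(client_sizes)):
        mask = (global_idx >= offsets[k]) & (global_idx < offsets[k + 1])
        per_client[k] = (global_idx[mask] - offsets[k]).tolist()
    return per_client

# Example: 4 clients with unequal, possibly non-IID local datasets;
# the effective batch size stays 32 regardless of the client count.
assignment = uniform_global_sample([500, 120, 800, 60], batch_size=32)
for client_id, local_indices in assignment.items():
    print(client_id, len(local_indices))

Under this reading, Latent Dirichlet Sampling would replace the strictly uniform allocation with client proportions drawn from a Dirichlet distribution, trading some batch deviation for shorter training time in the presence of stragglers; the precise mechanism is specified in the paper itself.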

Submitted: Jul 22, 2024