Model Splitting
Model splitting partitions large machine learning models across multiple devices or servers to overcome resource limitations in edge computing and federated learning. Current research focuses on optimizing model-partitioning strategies, developing efficient algorithms for allocating communication and computation resources (often variants of gradient descent), and designing novel model architectures that mitigate challenges such as stragglers and data heterogeneity. This approach is significant for deploying complex AI models on resource-constrained devices, improving training efficiency in collaborative learning settings, and enhancing privacy in distributed training scenarios.
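One common formulation of the partitioning problem above can be illustrated with a minimal, hypothetical sketch (function names and the example costs are assumptions, not from any listed paper): given per-layer compute costs, choose contiguous cut points so that the busiest device carries as little load as possible, found by binary-searching the per-device capacity.

```python
from itertools import accumulate  # noqa: F401  (stdlib only; no ML framework needed)

def split_layers(layer_costs, num_devices):
    """Partition layer_costs into at most num_devices contiguous segments,
    minimizing the maximum per-segment cost. Returns (cut_points, capacity),
    where cut_points[i]..cut_points[i+1] are the layers on device i."""
    n = len(layer_costs)

    def feasible(cap):
        # Greedily pack consecutive layers onto devices without exceeding cap.
        devices, current = 1, 0
        for c in layer_costs:
            if current + c > cap:
                devices += 1
                current = c
            else:
                current += c
        return devices <= num_devices

    # Binary search the smallest feasible per-device capacity.
    lo, hi = max(layer_costs), sum(layer_costs)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1

    # Recover the cut points with the optimal capacity.
    cuts, current = [0], 0
    for i, c in enumerate(layer_costs):
        if current + c > lo:
            cuts.append(i)
            current = c
        else:
            current += c
    cuts.append(n)
    return cuts, lo

# Example: six layers split across three devices.
cuts, cap = split_layers([4, 3, 2, 6, 1, 5], 3)
# cuts == [0, 2, 4, 6]: devices get layers [4,3], [2,6], [1,5]; max load cap == 8.
```

Real systems weigh more than compute cost (activation sizes at cut points, link bandwidth, memory limits), but this balanced-contiguous-partition core recurs in many pipeline-splitting schemes.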