Partial Model Training
Partial model training addresses the challenge of training large models on resource-constrained devices, particularly within federated learning frameworks. Current research focuses on adapting model architectures (such as CNNs, LSTMs, and Transformers) and training algorithms (including federated averaging and asynchronous schemes) so that each client trains only the portion of the model its compute, memory, and bandwidth allow, as sketched in the example below. This approach improves efficiency and client participation rates in federated learning, enabling larger and more complex models to be trained across diverse, heterogeneous networks of devices, with applications ranging from speech recognition to graph neural networks.
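To make the idea concrete, below is a minimal Python/NumPy sketch of one common flavor of partial model training: each client receives and updates only a width-scaled slice of the global weights sized to its capacity, and the server averages each parameter over the clients that actually trained it (in the spirit of width-scaling approaches such as HeteroFL). The capacity ratios, the toy single-layer "model," and the random-gradient `local_update` are illustrative assumptions, not the method of any specific paper listed on this page.

```python
# Sketch of width-based partial model training in a federated setting.
# Assumptions: a toy dense layer stands in for the model, and local training
# is faked with a random gradient step.
import numpy as np

rng = np.random.default_rng(0)

# Global model: one dense layer's weights, shape (out_features, in_features).
global_w = rng.normal(size=(8, 4))

# Fraction of the model each heterogeneous device can hold and train.
client_capacities = [1.0, 0.5, 0.25]


def extract_submodel(w, ratio):
    """Return the leading slice of rows a client of this capacity trains."""
    rows = max(1, int(w.shape[0] * ratio))
    return w[:rows].copy(), rows


def local_update(sub_w, lr=0.1):
    """Placeholder for local SGD: one step against a random 'gradient'."""
    fake_grad = rng.normal(size=sub_w.shape)
    return sub_w - lr * fake_grad


def aggregate(global_w, updates):
    """Average each parameter over the clients whose slice covered it."""
    accum = np.zeros_like(global_w)
    counts = np.zeros_like(global_w)
    for sub_w, rows in updates:
        accum[:rows] += sub_w
        counts[:rows] += 1.0
    covered = counts > 0
    new_w = global_w.copy()
    # Parameters no client trained this round keep their previous values.
    new_w[covered] = accum[covered] / counts[covered]
    return new_w


for round_idx in range(3):
    updates = []
    for ratio in client_capacities:
        sub_w, rows = extract_submodel(global_w, ratio)
        updates.append((local_update(sub_w), rows))
    global_w = aggregate(global_w, updates)
    print(f"round {round_idx}: mean |w| = {np.abs(global_w).mean():.3f}")
```

Real systems differ in how the slice is chosen (fixed leading channels, rolling windows, or random masks) and typically weight the average by each client's data size, but the core pattern is the same: distribute capability-matched submodels, then reconcile the partially overlapping updates into one global model.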
Papers
June 21, 2024
May 27, 2024
March 12, 2024
November 16, 2023
April 14, 2023
March 31, 2022