Local Training

Local training in machine learning performs multiple local updates to a model on an individual device before communicating with a central server, reducing communication costs and improving efficiency in distributed settings such as federated learning (a minimal sketch appears below). Current research emphasizes techniques such as low-rank adaptation, model compression (sparsification and quantization), and adaptive optimization methods (e.g., Adam-based approaches) to make local training more effective, particularly for large language models and under data heterogeneity across devices. Local training matters for the scalability and privacy of federated learning, since devices exchange model updates rather than raw data; it shapes both the design of more efficient algorithms and the practical deployment of machine learning models in resource-constrained environments.
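To make the local-update pattern concrete, the sketch below implements a minimal FedAvg-style round in Python: each simulated client takes several gradient steps on its own (non-IID) data before the server averages the resulting models. This is an illustrative toy on a linear model, not the method of any specific paper; names such as `local_train` and `fed_avg` are our own.

```python
import numpy as np

def local_train(w, X, y, lr=0.1, local_steps=5):
    """Run several gradient steps on one client's data (linear
    regression, mean squared error) before any communication."""
    w = w.copy()
    for _ in range(local_steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of the MSE
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server step: average client models, weighted by dataset size."""
    total = sum(client_sizes)
    return sum(n / total * w for w, n in zip(client_weights, client_sizes))

# Simulate three clients with heterogeneous (shifted) input distributions.
rng = np.random.default_rng(0)
clients = []
for shift in (0.0, 1.0, -1.0):
    X = rng.normal(loc=shift, size=(50, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w_global = np.zeros(3)
for _ in range(10):  # communication rounds
    local_models = [local_train(w_global, X, y) for X, y in clients]
    w_global = fed_avg(local_models, [len(y) for _, y in clients])

print("learned weights:", w_global)
```

Because each round transmits one model per client instead of one gradient per step, communication cost drops by roughly the number of local steps, which is the core efficiency argument for local training.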
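The model-compression techniques mentioned above are commonly applied to the *update* a client sends back. The following is a minimal sketch, under our own assumptions, of top-k sparsification combined with 8-bit linear quantization of a model delta; `compress_update` and `decompress_update` are hypothetical names for illustration.

```python
import numpy as np

def compress_update(delta, k=10):
    """Keep only the k largest-magnitude entries of a model delta
    (top-k sparsification), then linearly quantize them to int8."""
    idx = np.argsort(np.abs(delta))[-k:]             # indices of top-k entries
    vals = delta[idx]
    scale = max(np.abs(vals).max() / 127, 1e-12)     # map values into int8 range
    q = np.round(vals / scale).astype(np.int8)
    return idx, q, scale

def decompress_update(idx, q, scale, dim):
    """Server side: rebuild a dense (approximate) update."""
    delta = np.zeros(dim)
    delta[idx] = q.astype(np.float64) * scale
    return delta

# Usage: send (idx, q, scale) over the wire instead of the full delta.
rng = np.random.default_rng(1)
delta = rng.normal(size=100)
idx, q, scale = compress_update(delta, k=10)
approx = decompress_update(idx, q, scale, dim=100)
```

Here the client transmits k int8 values plus indices and one scale instead of a dense float vector, trading a controlled approximation error for a much smaller payload.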

Papers