Local Heterogeneous Model
Local heterogeneous models in federated learning address the challenge of training collaboratively on decentralized data when clients differ in compute resources and data distributions. Current research focuses on algorithms that enable efficient knowledge sharing between a global model and diverse client-specific models, often employing techniques such as shared feature extractors, mixture-of-experts routing, and low-rank adaptation (LoRA). These methods aim to improve model accuracy and personalization while minimizing communication and computational overhead, making federated learning more practical and scalable across heterogeneous environments. The resulting gains in efficiency and accuracy have significant implications for privacy-preserving machine learning applications.
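To make the communication-cost argument behind LoRA concrete, here is a minimal NumPy sketch of a low-rank update. All names, shapes, and the rank are illustrative assumptions, not taken from any specific federated learning method: the idea shown is only that clients can keep a frozen base weight local and exchange two small factors instead of the full matrix.

```python
import numpy as np

# Illustrative shapes (assumptions): a 64x128 layer adapted with rank 4.
d_out, d_in, rank = 64, 128, 4

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # frozen base weight, stays on the client
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, rank))                   # trainable low-rank factor, zero-initialized

# Effective weight is the base plus the low-rank correction B @ A.
W_eff = W + B @ A

# Communication saving: only A and B would be exchanged, not W.
full_params = W.size                # 64 * 128 = 8192
lora_params = A.size + B.size       # 4 * 128 + 64 * 4 = 768
print(f"full: {full_params}, lora: {lora_params}, "
      f"ratio: {lora_params / full_params:.3f}")
```

With these toy shapes the exchanged parameters shrink to under a tenth of the full matrix, which is the kind of reduction that motivates LoRA-style updates in bandwidth-constrained federated settings.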