Heterogeneous Federated Learning

Heterogeneous federated learning (HFL) addresses the challenge of collaboratively training machine learning models across diverse devices with varying data distributions, computational capabilities, and model architectures, all while preserving data privacy. Current research focuses on robust aggregation strategies, including personalized model training and adaptive methods for skewed data, often drawing on techniques such as knowledge distillation, prompt tuning, and prototype-based approaches. These advances are crucial for deploying federated learning in resource-constrained environments and in domains with highly non-IID data, such as healthcare and IoT. The ultimate goal is to improve the accuracy, efficiency, and fairness of federated learning in real-world settings.
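As a concrete reference point for the aggregation strategies mentioned above, the sketch below shows the standard FedAvg rule: the server averages client model updates weighted by local dataset size, which partially compensates for clients holding unequal amounts of data. This is a minimal illustration, not any specific paper's method; the function name and the toy client weights are hypothetical.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model parameters, weighting each client by its
    local dataset size (the FedAvg rule). Personalized and adaptive HFL
    methods typically replace or augment this simple weighted average."""
    total = sum(client_sizes)
    agg = np.zeros_like(client_weights[0])
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * w
    return agg

# Three hypothetical clients with different data volumes (non-IID setting):
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 30, 60]
print(fedavg(weights, sizes))  # clients with more data pull the average harder
```

In a heterogeneous setting, this size-based weighting is only a starting point: when clients differ in model architecture as well as data, methods such as knowledge distillation or prototype exchange replace direct parameter averaging entirely.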

Papers