Hierarchical Federated Learning
Hierarchical Federated Learning (HFL) addresses the communication bottleneck and scalability limits of conventional two-tier federated learning by structuring training across multiple layers, typically clients, edge servers, and a central cloud server: clients send updates to a nearby edge server, which aggregates them locally before forwarding a summary to the cloud. Current research focuses on optimizing HFL algorithms, including gradient-descent, ADMM, and Bayesian variants, to address challenges such as data heterogeneity, communication overhead, and adversarial attacks, often incorporating techniques like model compression and dynamic client selection. This approach holds significant promise for privacy-preserving machine learning in diverse applications, including autonomous driving, healthcare, and the IoT, because it enables collaborative model training across geographically dispersed or resource-constrained devices.
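The two-level aggregation pattern at the core of HFL can be sketched concretely. Below is a minimal, illustrative Python sketch of hierarchical FedAvg-style training on a toy least-squares problem: clients run a few local gradient steps, each edge server averages its clients' models over a couple of edge rounds, and the cloud averages the edge models. All names (`local_update`, `weighted_average`, `make_client`), the round counts, and the synthetic non-IID data are assumptions made for illustration, not a specific published algorithm.

```python
# Illustrative sketch of two-level (client -> edge -> cloud) FedAvg-style
# aggregation. The toy quadratic objective and all function names are
# assumptions for demonstration, not drawn from a specific HFL paper.
import numpy as np

rng = np.random.default_rng(0)

def local_update(model, data, lr=0.05, steps=5):
    """One client's local training: a few gradient-descent steps on a
    least-squares objective (1/n)||X @ w - y||^2, standing in for real training."""
    X, y = data
    w = model.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def weighted_average(models, weights):
    """FedAvg aggregation: average parameters weighted by sample counts."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, models))

# Synthetic setup: 2 edge servers, each with 3 clients holding non-IID data.
dim = 4
true_w = rng.normal(size=dim)

def make_client(shift):
    X = rng.normal(loc=shift, size=(20, dim))  # shift induces heterogeneity
    y = X @ true_w + 0.1 * rng.normal(size=20)
    return X, y

edges = [[make_client(s) for _ in range(3)] for s in (-1.0, 1.0)]
global_model = np.zeros(dim)

for _cloud_round in range(10):           # global (cloud) rounds
    edge_models = []
    for clients in edges:
        # Each edge runs a few local aggregation rounds before reporting
        # to the cloud, reducing cloud-edge communication.
        edge_model = global_model.copy()
        for _edge_round in range(2):
            client_models = [local_update(edge_model, d) for d in clients]
            edge_model = weighted_average(client_models,
                                          [len(d[1]) for d in clients])
        edge_models.append(edge_model)
    # Cloud aggregates edge models, weighted by total samples per edge.
    global_model = weighted_average(edge_models,
                                    [sum(len(d[1]) for d in c) for c in edges])

print("error vs. true weights:", np.linalg.norm(global_model - true_w))
```

Because each edge performs several aggregation rounds per cloud round, clients exchange updates mostly with their nearby edge server rather than the central server, which is where HFL's communication and scalability gains over flat federated averaging come from.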