Federated Distillation

Federated distillation (FD) is a distributed machine learning approach that leverages knowledge distillation to train models collaboratively across multiple clients without directly sharing sensitive data: clients typically exchange model outputs (e.g., logits or soft labels) rather than full model parameters. Current research focuses on improving FD's robustness to adversarial attacks, addressing data heterogeneity among clients, and enhancing communication efficiency through techniques such as accumulating local updates and selective knowledge sharing. FD holds significant promise for privacy-preserving collaborative learning, particularly in resource-constrained environments such as edge networks and in sensitive domains such as medical image analysis, since it enables the training of large models without compromising data security.
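
To make the core mechanism concrete, below is a minimal NumPy sketch of one common FD setup, in which each client trains a private model locally and shares only its soft predictions on a shared public (unlabeled) dataset, then distills from the server's averaged predictions. This is an illustrative toy, not any specific paper's algorithm: the linear softmax model, the plain averaging step, and all names and hyperparameters are assumptions made for brevity.

```python
# Toy federated-distillation loop (illustrative sketch, not a reference
# implementation). Private data never leaves a client; only predictions
# on the shared public set are exchanged.
import numpy as np

rng = np.random.default_rng(0)
n_clients, n_features, n_classes = 3, 10, 4

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def grad_step(W, X, targets, lr=0.1):
    """One cross-entropy gradient step; targets may be one-hot or soft labels."""
    probs = softmax(X @ W)
    W -= lr * X.T @ (probs - targets) / len(X)
    return W

# Private local datasets (stay on each client) and a shared public set.
local_X = [rng.normal(size=(50, n_features)) for _ in range(n_clients)]
local_y = [np.eye(n_classes)[rng.integers(n_classes, size=50)]
           for _ in range(n_clients)]
public_X = rng.normal(size=(100, n_features))

weights = [rng.normal(scale=0.01, size=(n_features, n_classes))
           for _ in range(n_clients)]

for rnd in range(20):
    # 1) Local training on private labeled data.
    for k in range(n_clients):
        weights[k] = grad_step(weights[k], local_X[k], local_y[k])
    # 2) Each client uploads only soft predictions on the public set.
    soft_preds = [softmax(public_X @ W) for W in weights]
    # 3) Server aggregates the shared knowledge (here: a simple average).
    consensus = np.mean(soft_preds, axis=0)
    # 4) Clients distill from the consensus instead of exchanging weights.
    for k in range(n_clients):
        weights[k] = grad_step(weights[k], public_X, consensus)
```

Note how the per-round upload is a predictions matrix of shape (public-set size × classes), independent of model size, which is the source of FD's communication savings over weight-sharing schemes such as FedAvg.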

Papers