Differentially Private Federated Learning

Differentially private federated learning (DPFL) enables collaborative model training across multiple devices without exposing individual data, using differential privacy mechanisms that add calibrated noise to protect sensitive information. Current research focuses on improving the accuracy of DPFL models by navigating the privacy-utility trade-off through techniques such as personalized optimization, robust clustering of clients based on model updates and loss, and noise-aware aggregation strategies. These advances are crucial for deploying privacy-preserving machine learning in real-world applications, particularly in sensitive domains like healthcare and finance, where data sharing is restricted.
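The core mechanism can be illustrated with a minimal sketch of central-DP federated averaging: each client's model update is clipped to a fixed L2 norm, and Gaussian noise calibrated to that clipping bound is added before averaging. All function names, parameters, and defaults below are illustrative assumptions, not from any specific paper.

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0,
                         noise_multiplier=1.0, seed=None):
    """Aggregate client updates with L2 clipping and Gaussian noise.

    Illustrative sketch only: clipping bounds each client's influence
    (sensitivity), and noise scaled by noise_multiplier * clip_norm
    provides the differential privacy guarantee for the aggregate.
    """
    rng = np.random.default_rng(seed)
    clipped = []
    for u in client_updates:
        u = np.asarray(u, dtype=float)
        norm = np.linalg.norm(u)
        # Scale down any update whose L2 norm exceeds clip_norm.
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise standard deviation is tied to the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# Example: three clients' updates on a 4-parameter model; the second
# client's large update is clipped before aggregation.
updates = [np.array([0.5, -0.2, 0.1, 0.0]),
           np.array([3.0, 1.0, -2.0, 0.5]),
           np.array([-0.1, 0.4, 0.2, -0.3])]
avg = dp_federated_average(updates, clip_norm=1.0, noise_multiplier=0.5)
```

A larger `noise_multiplier` strengthens the privacy guarantee at the cost of accuracy, which is exactly the privacy-utility trade-off the techniques above aim to soften.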

Papers