Differentially Private Federated Learning
Differentially private federated learning (DPFL) aims to enable collaborative model training across multiple devices without compromising individual data privacy, using differential privacy mechanisms that inject calibrated noise to protect sensitive information. Current research focuses on improving the accuracy of DPFL models by navigating the trade-off between privacy and utility, through techniques such as personalized optimization, robust clustering of clients based on their model updates and losses, and noise-aware aggregation strategies. These advances are crucial for deploying privacy-preserving machine learning in real-world applications, particularly in sensitive domains like healthcare and finance, where data sharing is restricted.
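The core mechanism described above can be illustrated with a minimal sketch of a server-side DP aggregation step: each client update is clipped to a fixed L2 norm (bounding per-client sensitivity), the clipped updates are averaged, and Gaussian noise scaled to that sensitivity is added. The function name and parameters here are illustrative, not from any specific paper or library:

```python
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Sketch of differentially private federated averaging (Gaussian mechanism).

    Clips each client's update to L2 norm `clip_norm`, averages the clipped
    updates, and adds Gaussian noise whose scale is tied to the per-client
    sensitivity clip_norm / n.
    """
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Scale the update down so its L2 norm is at most clip_norm.
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    n = len(clipped)
    avg = np.mean(clipped, axis=0)
    # One client can shift the average by at most clip_norm / n,
    # so the noise standard deviation is scaled accordingly.
    sigma = noise_multiplier * clip_norm / n
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

Larger `noise_multiplier` values give stronger privacy guarantees but noisier aggregates; this is exactly the privacy-utility trade-off that personalized optimization and noise-aware aggregation strategies try to soften.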