Client Level Differential Privacy
Client-level differential privacy in federated learning protects each participant's entire contribution during collaborative model training: client model updates are clipped and perturbed with noise so that the shared model reveals little about whether any single client participated. Current research focuses on improving the accuracy of models trained under these privacy constraints, exploring techniques such as sparsification, variance reduction, and adaptive client selection to mitigate the utility loss caused by clipping and noise. This area is crucial for enabling secure, privacy-preserving machine learning in sensitive domains like healthcare and finance, where data sharing is restricted but collaborative learning is highly desirable.
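The clip-then-noise mechanism described above can be sketched in a few lines. This is a minimal illustration, not any specific paper's method: the function name, the fixed clipping bound, and the noise calibration (noise scaled to clip_norm / n, the sensitivity of the mean to one client) are assumptions chosen for clarity.

```python
import numpy as np

def dp_fedavg_round(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One round of client-level DP federated averaging (illustrative sketch).

    Each client's update is clipped to an L2 norm of `clip_norm`, the clipped
    updates are averaged, and Gaussian noise calibrated to the clipping bound
    is added so the aggregate reveals limited information about any one client.
    """
    rng = rng or np.random.default_rng(0)
    n = len(client_updates)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # project onto L2 ball
        clipped.append(u * scale)
    avg = np.mean(clipped, axis=0)
    # Sensitivity of the mean to one client is clip_norm / n,
    # so the noise standard deviation is noise_multiplier * clip_norm / n.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=avg.shape)
    return avg + noise
```

With `noise_multiplier=0` the function reduces to plain federated averaging with clipping, which makes the trade-off explicit: larger noise multipliers give stronger privacy guarantees but degrade the averaged update, which is exactly the accuracy loss the techniques above aim to reduce.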