Client-Level Differential Privacy

Client-level differential privacy in federated learning protects each participant's entire contribution during collaborative training, typically by clipping and adding calibrated noise to local model updates before they are shared with a central server, so that the trained model reveals little about whether any single client took part. Current research focuses on improving the accuracy of models trained under these privacy constraints, exploring techniques such as sparsification, variance reduction, and adaptive client selection to mitigate the utility loss caused by clipping and noise. This area is crucial for enabling secure, privacy-preserving machine learning in sensitive domains such as healthcare and finance, where data sharing is restricted but collaborative learning is highly desirable.
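The clip-and-noise aggregation step can be sketched as follows. This is a minimal illustration in the style of DP-FedAvg, not the method of any particular paper: the function name `dp_fedavg_aggregate` and the parameter values are assumptions chosen for the example, and the noise scale is stated relative to the clipping norm without a full privacy accountant.

```python
import numpy as np

def dp_fedavg_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Aggregate client updates with client-level DP (illustrative sketch).

    Each client's update is clipped to L2 norm `clip_norm`, bounding any
    single client's influence on the sum; Gaussian noise with standard
    deviation `noise_multiplier * clip_norm` is then added before averaging.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Scale down (never up) so the update's L2 norm is at most clip_norm.
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# Example: three clients' pseudo-gradients for a 4-parameter model.
updates = [np.array([0.5, -0.2, 0.1, 0.0]),
           np.array([3.0, 1.0, -1.0, 2.0]),   # large update gets clipped
           np.array([-0.1, 0.4, 0.2, -0.3])]
avg = dp_fedavg_aggregate(updates, clip_norm=1.0, noise_multiplier=1.1)
```

In practice the noise multiplier is chosen jointly with the number of rounds and the client sampling rate via a privacy accountant to meet a target (epsilon, delta) guarantee; larger multipliers give stronger privacy but degrade accuracy, which is what the sparsification and variance-reduction techniques above aim to counteract.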

Papers