Differential Privacy
Differential privacy (DP) is a rigorous framework for ensuring data privacy in machine learning by adding carefully calibrated noise to model training processes. Current research focuses on improving the accuracy of DP models, particularly for large-scale training, through techniques like adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient processing methods. This active area of research is crucial for enabling the responsible use of sensitive data in various applications, ranging from healthcare and finance to natural language processing and smart grids, while maintaining strong privacy guarantees.
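The "carefully calibrated noise" mentioned above is usually added during gradient computation, as in DP-SGD (clip each per-example gradient to bound its sensitivity, average, then add Gaussian noise scaled to that bound). The sketch below is a minimal illustration of that idea; the function name and parameters (`clip_norm`, `noise_multiplier`) are illustrative assumptions, not the API of any particular library.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD-style update (illustrative sketch, not a library API).

    Each per-example gradient is clipped to L2 norm <= clip_norm, the
    clipped gradients are averaged, and Gaussian noise with standard
    deviation noise_multiplier * clip_norm / batch_size is added.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so the gradient's L2 norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

The clipping bound fixes the sensitivity of the averaged gradient, which is what lets the Gaussian noise scale translate into a formal (epsilon, delta) privacy guarantee; the noise multiplier is then chosen by a privacy accountant for the desired budget.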
Papers
Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations
Jialing Liao, Zheng Chen, Erik G. Larsson
On the Statistical Complexity of Estimation and Testing under Privacy Constraints
Clément Lalanne (DANTE, OCKHAM), Aurélien Garivier (UMPA-ENSL), Rémi Gribonval (DANTE, OCKHAM)
Fine-Tuning with Differential Privacy Necessitates an Additional Hyperparameter Search
Yannis Cattan, Christopher A. Choquette-Choo, Nicolas Papernot, Abhradeep Thakurta