Differential Privacy
Differential privacy (DP) is a rigorous framework for protecting individual records in machine learning: it guarantees that a model's output changes little whether or not any single example is included in training, typically achieved by adding carefully calibrated noise during training. Current research focuses on improving the accuracy of DP models, particularly at large scale, through techniques such as adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient-processing methods. This active line of work is crucial for the responsible use of sensitive data in applications ranging from healthcare and finance to natural language processing and smart grids, while maintaining strong privacy guarantees.
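To make the core mechanism concrete, here is a minimal NumPy sketch of one DP-SGD update: each per-example gradient is clipped to a fixed L2 norm, the clipped gradients are summed, and Gaussian noise calibrated to the clipping norm is added before averaging. This is a generic illustration of the noise-addition idea described above, not the method of any specific paper listed below; the function name `dp_sgd_step` and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD update: clip each example's gradient, add calibrated
    Gaussian noise to the sum, then average and take a gradient step.

    per_example_grads: array of shape (batch_size, num_params).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = per_example_grads.shape[0]

    # Clip each per-example gradient so its L2 norm is at most clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale

    # Gaussian mechanism: noise standard deviation is proportional to the
    # clipping norm (the per-example sensitivity of the summed gradient).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / n

    return params - lr * noisy_grad

# Toy usage: random stand-in gradients for a 5-parameter model.
rng = np.random.default_rng(0)
w = np.zeros(5)
grads = rng.normal(size=(32, 5))
w = dp_sgd_step(w, grads, rng=rng)
```

Note that this sketch covers only the per-step mechanism; in practice, choosing `noise_multiplier` to meet a target (epsilon, delta) budget over many training steps requires a separate privacy accountant.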
Papers
Exploratory Analysis of Federated Learning Methods with Differential Privacy on MIMIC-III
Aron N. Horvath, Matteo Berchier, Farhad Nooralahzadeh, Ahmed Allam, Michael Krauthammer
DIFF2: Differential Private Optimization via Gradient Differences for Nonconvex Distributed Learning
Tomoya Murata, Taiji Suzuki
Gradient Descent with Linearly Correlated Noise: Theory and Applications to Differential Privacy
Anastasia Koloskova, Ryan McKenna, Zachary Charles, Keith Rush, Brendan McMahan
FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations
Hui-Po Wang, Dingfan Chen, Raouf Kerkouche, Mario Fritz