Differential Privacy
Differential privacy (DP) is a rigorous mathematical framework for limiting what can be learned about any individual record in a dataset; in machine learning it is typically enforced by adding carefully calibrated noise during training. Current research focuses on improving the accuracy of DP models, particularly at large scale, through techniques such as adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient processing methods. This work is crucial for the responsible use of sensitive data in applications ranging from healthcare and finance to natural language processing and smart grids, while maintaining strong privacy guarantees.
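To make the "calibrated noise during training" idea concrete, here is a minimal sketch of a DP-SGD-style gradient step in NumPy: each per-example gradient is clipped to a fixed L2 norm (bounding any single record's influence) and Gaussian noise scaled to that bound is added before averaging. The function name, parameters, and noise level are illustrative assumptions, not taken from any of the papers listed below.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One differentially private gradient aggregation step (DP-SGD style).

    Each per-example gradient is clipped to L2 norm `clip_norm`, then
    Gaussian noise with standard deviation `noise_multiplier * clip_norm`
    is added to the sum before averaging. The clip bounds the sensitivity
    of the sum to any one example; the noise hides that contribution.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down gradients whose norm exceeds the clip bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=per_example_grads[0].shape)
    return noisy_sum / len(per_example_grads)

# Usage: aggregate 32 synthetic per-example gradients privately.
rng = np.random.default_rng(0)
grads = [rng.normal(size=4) for _ in range(32)]
private_grad = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

In a real training loop this noisy average would feed the optimizer update, and a privacy accountant would track the cumulative (epsilon, delta) guarantee across steps; that bookkeeping is omitted here for brevity.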
Papers
Protecting Data from all Parties: Combining FHE and DP in Federated Learning
Arnaud Grivet Sébert, Renaud Sirdey, Oana Stan, Cédric Gouy-Pailler
SmoothNets: Optimizing CNN architecture design for differentially private deep learning
Nicolas W. Remerscheid, Alexander Ziller, Daniel Rueckert, Georgios Kaissis
Defending against Reconstruction Attacks through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-Ray Data
Joceline Ziegler, Bjarne Pfitzner, Heinrich Schulz, Axel Saalbach, Bert Arnrich
LPGNet: Link Private Graph Networks for Node Classification
Aashish Kolluri, Teodora Baluta, Bryan Hooi, Prateek Saxena
Large Scale Transfer Learning for Differentially Private Image Classification
Harsh Mehta, Abhradeep Thakurta, Alexey Kurakin, Ashok Cutkosky
A New Dimensionality Reduction Method Based on Hensel's Compression for Privacy Protection in Federated Learning
Ahmed El Ouadrhiri, Ahmed Abdelhadi
Differentially Private Multivariate Time Series Forecasting of Aggregated Human Mobility With Deep Learning: Input or Gradient Perturbation?
Héber H. Arcolezi, Jean-François Couchot, Denis Renaud, Bechara Al Bouna, Xiaokui Xiao