Differential Privacy
Differential privacy (DP) is a rigorous framework for protecting individual data in machine learning: by adding carefully calibrated noise during training, it provably bounds how much any single record can influence the released model. Current research focuses on narrowing the accuracy gap between DP and non-private models, particularly for large-scale training, through techniques such as adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient processing methods. This active area of research is crucial for the responsible use of sensitive data in applications ranging from healthcare and finance to natural language processing and smart grids, while maintaining strong privacy guarantees.
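The noise-calibration idea above is most commonly realized as DP-SGD: each example's gradient is clipped to a fixed norm (bounding the sensitivity of any single record), the clipped gradients are averaged, and Gaussian noise scaled to that clipping bound is added. Below is a minimal NumPy sketch of one such step; the function name `dp_sgd_step` and the parameter names are illustrative, not taken from any of the papers listed here, and real implementations additionally track the cumulative privacy budget across steps.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One differentially private gradient step: clip each example's
    gradient to clip_norm, average, then add calibrated Gaussian noise."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so that ||g|| <= clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise standard deviation is proportional to the per-example
    # sensitivity (clip_norm) divided by the batch size.
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    return mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)

rng = np.random.default_rng(0)
grads = [np.array([3.0, 4.0]),   # norm 5, will be clipped to norm 1
         np.array([0.5, 0.5])]   # norm ~0.71, left unchanged
noisy_grad = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

With `noise_multiplier=0` the step reduces to ordinary clipped averaging, which is a useful sanity check when debugging; the privacy guarantee itself comes from the noise and from composing the per-step guarantees over the whole training run.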
Papers
Do Gradient Inversion Attacks Make Federated Learning Unsafe?
Ali Hatamizadeh, Hongxu Yin, Pavlo Molchanov, Andriy Myronenko, Wenqi Li, Prerna Dogra, Andrew Feng, Mona G. Flores, Jan Kautz, Daguang Xu, Holger R. Roth
NeuroMixGDP: A Neural Collapse-Inspired Random Mixup for Private Data Release
Donghao Li, Yang Cao, Yuan Yao
Personalization Improves Privacy-Accuracy Tradeoffs in Federated Learning
Alberto Bietti, Chen-Yu Wei, Miroslav Dudík, John Langford, Zhiwei Steven Wu
Backpropagation Clipping for Deep Learning with Differential Privacy
Timothy Stevens, Ivoline C. Ngong, David Darais, Calvin Hirsch, David Slater, Joseph P. Near
Differential Private Knowledge Transfer for Privacy-Preserving Cross-Domain Recommendation
Chaochao Chen, Huiwen Wu, Jiajie Su, Lingjuan Lyu, Xiaolin Zheng, Li Wang
Efficient Privacy Preserving Logistic Regression for Horizontally Distributed Data
Guanhong Miao
Training Differentially Private Models with Secure Multiparty Computation
Sikha Pentyala, Davis Railsback, Ricardo Maia, Rafael Dowsley, David Melanson, Anderson Nascimento, Martine De Cock
Differentially Private Graph Classification with GNNs
Tamara T. Mueller, Johannes C. Paetzold, Chinmay Prabhakar, Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis
Bounding Training Data Reconstruction in Private (Deep) Learning
Chuan Guo, Brian Karrer, Kamalika Chaudhuri, Laurens van der Maaten
Toward Training at ImageNet Scale with Differential Privacy
Alexey Kurakin, Shuang Song, Steve Chien, Roxana Geambasu, Andreas Terzis, Abhradeep Thakurta
Transfer Learning In Differential Privacy's Hybrid-Model
Refael Kohen, Or Sheffet