Differential Privacy
Differential privacy (DP) is a rigorous framework for protecting individual-level data in machine learning: it bounds how much any single training example can influence a model's output by adding carefully calibrated noise during training. Current research focuses on improving the accuracy of DP models, particularly at large scale, through techniques such as adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient processing methods. This work is crucial for enabling the responsible use of sensitive data in applications ranging from healthcare and finance to natural language processing and smart grids, while maintaining strong privacy guarantees.
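The canonical mechanism behind this kind of noisy training is DP-SGD: clip each example's gradient to a fixed L2 norm, sum the clipped gradients, and add Gaussian noise scaled to that clipping norm before updating. The sketch below illustrates one such step for logistic regression; the function name, hyperparameters, and loss are illustrative assumptions, not drawn from the papers listed here.

import numpy as np

def dp_sgd_step(weights, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One illustrative DP-SGD step for logistic regression.

    Per-example gradients are clipped to `clip_norm`, summed, and Gaussian
    noise with standard deviation `noise_multiplier * clip_norm` is added
    before averaging -- the standard Gaussian-mechanism recipe (assumed
    hyperparameters, not taken from any listed paper).
    """
    if rng is None:
        rng = np.random.default_rng()

    # Per-example logistic-loss gradients: (sigmoid(x . w) - y) * x
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    per_example_grads = (preds - y)[:, None] * X  # shape (n, d)

    # Clip each example's gradient to L2 norm <= clip_norm, bounding
    # each individual's contribution (its sensitivity).
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # Sum, add calibrated Gaussian noise, then average over the batch.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=weights.shape
    )
    return weights - lr * noisy_sum / X.shape[0]

# Toy usage: one private update on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 5))
y = rng.integers(0, 2, size=32).astype(float)
w = dp_sgd_step(np.zeros(5), X, y, rng=rng)

Clipping is what lets the added Gaussian noise translate into a formal (epsilon, delta) guarantee; the noise multiplier then trades privacy strength against the accuracy that the techniques above aim to recover.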
Papers
Private Federated Frequency Estimation: Adapting to the Hardness of the Instance
Jingfeng Wu, Wennan Zhu, Peter Kairouz, Vladimir Braverman
On the resilience of Collaborative Learning-based Recommender Systems Against Community Detection Attack
Yacine Belal, Sonia Ben Mokhtar, Mohamed Maouche, Anthony Simonet-Boulogne
ViP: A Differentially Private Foundation Model for Computer Vision
Yaodong Yu, Maziar Sanjabi, Yi Ma, Kamalika Chaudhuri, Chuan Guo
Preserving privacy in domain transfer of medical AI models comes at no performance costs: The integral role of differential privacy
Soroosh Tayebi Arasteh, Mahshad Lotfinia, Teresa Nolte, Marwin Saehn, Peter Isfort, Christiane Kuhl, Sven Nebelung, Georgios Kaissis, Daniel Truhn
Personalized Graph Federated Learning with Differential Privacy
Francois Gauthier, Vinay Chakravarthi Gogineni, Stefan Werner, Yih-Fang Huang, Anthony Kuh