Differential Privacy
Differential privacy (DP) is a rigorous framework for protecting individual records in machine learning: carefully calibrated noise is added during training or release so that the model's output is provably insensitive to any single data point. Current research focuses on improving the accuracy of DP models, particularly for large-scale training, through techniques such as adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient processing methods. This active area of research is crucial for enabling the responsible use of sensitive data in applications ranging from healthcare and finance to natural language processing and smart grids, while maintaining strong privacy guarantees.
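The "carefully calibrated noise" above can be illustrated with the classic Laplace mechanism, the basic building block of many DP systems. This is a generic textbook sketch, not code from any of the papers listed below; the function and parameter names (e.g. `laplace_mechanism`) are our own.

```python
# Minimal sketch of the Laplace mechanism for epsilon-differential privacy.
# Hypothetical illustration; names are not taken from any listed paper.
import numpy as np


def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float, rng: np.random.Generator) -> float:
    """Release true_value with epsilon-DP by adding Laplace noise.

    sensitivity: the maximum change in the query's output when one
    record is added or removed. A smaller epsilon means stronger
    privacy, which requires a larger noise scale.
    """
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)


# Example: privately release a count of 100 records. A counting query
# has sensitivity 1, since one person changes the count by at most 1.
rng = np.random.default_rng(seed=0)
noisy_count = laplace_mechanism(100.0, sensitivity=1.0, epsilon=0.5, rng=rng)
```

In DP model training (e.g. DP-SGD), the same calibration idea is applied to clipped per-example gradients at each step, with Gaussian rather than Laplace noise and a privacy accountant tracking the cumulative budget.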
Papers
Vision Through the Veil: Differential Privacy in Federated Learning for Medical Image Classification
Kishore Babu Nampalle, Pradeep Singh, Uppala Vivek Narayan, Balasubramanian Raman
Differential Privacy May Have a Potential Optimization Effect on Some Swarm Intelligence Algorithms besides Privacy-preserving
Zhiqiang Zhang, Hong Zhu, Meiyi Xie
Probing the Transition to Dataset-Level Privacy in ML Models Using an Output-Specific and Data-Resolved Privacy Profile
Tyler LeBlond, Joseph Munoz, Fred Lu, Maya Fuchs, Elliott Zaresky-Williams, Edward Raff, Brian Testa
Differentially Private Video Activity Recognition
Zelun Luo, Yuliang Zou, Yijin Yang, Zane Durante, De-An Huang, Zhiding Yu, Chaowei Xiao, Li Fei-Fei, Animashree Anandkumar
Optimal Differentially Private Model Training with Public Data
Andrew Lowy, Zeman Li, Tianjian Huang, Meisam Razaviyayn
About the Cost of Central Privacy in Density Estimation
Clément Lalanne (ENS de Lyon, OCKHAM), Aurélien Garivier (UMPA-ENSL, MC2), Rémi Gribonval (OCKHAM)
Practical Privacy-Preserving Gaussian Process Regression via Secret Sharing
Jinglong Luo, Yehong Zhang, Jiaqi Zhang, Shuang Qin, Hui Wang, Yue Yu, Zenglin Xu