Differential Privacy
Differential privacy (DP) is a rigorous mathematical framework that bounds how much any single individual's data can influence the output of a computation; in machine learning it is typically enforced by clipping per-example gradients and adding carefully calibrated noise during training. Current research focuses on narrowing the accuracy gap between DP and non-private models, particularly for large-scale training, through techniques such as adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient processing methods. This active area of research is crucial for enabling the responsible use of sensitive data in applications ranging from healthcare and finance to natural language processing and smart grids, while maintaining formal privacy guarantees.
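As a concrete illustration of the clip-and-noise mechanism described above, here is a minimal Python sketch of one DP-SGD-style aggregation step. It is not taken from any of the papers listed below; the function name, the clipping bound clip_norm, the noise_multiplier, and the toy gradients are all illustrative assumptions.

import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    # Illustrative sketch, not any specific paper's method.
    # Clip each example's gradient to clip_norm in L2 norm, sum the
    # clipped gradients, add Gaussian noise scaled to the clipping
    # bound, and average over the batch.
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Toy batch of 4 per-example gradients for a 3-parameter model.
grads = [np.array([0.5, -1.2, 3.0]),
         np.array([0.1, 0.4, -0.2]),
         np.array([2.5, 2.5, 2.5]),
         np.array([-0.3, 0.9, 0.0])]
print(dp_sgd_step(grads))

The clipping bounds each individual's contribution to the summed gradient, which is what lets the Gaussian noise, scaled to that bound, yield a formal DP guarantee for the step.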
Papers
Unified Enhancement of Privacy Bounds for Mixture Mechanisms via $f$-Differential Privacy
Chendi Wang, Buxin Su, Jiayuan Ye, Reza Shokri, Weijie J. Su
Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification
Yiwei Li, Chien-Wei Huang, Shuai Wang, Chong-Yung Chi, Tony Q. S. Quek
Privacy-Preserving Federated Learning over Vertically and Horizontally Partitioned Data for Financial Anomaly Detection
Swanand Ravindra Kadhe, Heiko Ludwig, Nathalie Baracaldo, Alan King, Yi Zhou, Keith Houck, Ambrish Rawat, Mark Purcell, Naoise Holohan, Mikio Takeuchi, Ryo Kawahara, Nir Drucker, Hayim Shaul, Eyal Kushnir, Omri Soceanu
Mean Estimation Under Heterogeneous Privacy Demands
Syomantak Chaudhuri, Konstantin Miagkov, Thomas A. Courtade
Conditional Density Estimations from Privacy-Protected Data
Yifei Xiong, Nianqiao P. Ju, Sanguo Zhang
PrivImage: Differentially Private Synthetic Image Generation using Diffusion Models with Semantic-Aware Pretraining
Kecen Li, Chen Gong, Zhixiang Li, Yuzhong Zhao, Xinwen Hou, Tianhao Wang