Differential Privacy
Differential privacy (DP) is a rigorous framework that bounds how much any single individual's data can influence a model's output; in machine learning it is typically enforced by adding carefully calibrated noise during training. Current research focuses on improving the accuracy of DP models, particularly for large-scale training, through techniques such as adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient processing methods. This active area of research is crucial for enabling the responsible use of sensitive data in applications ranging from healthcare and finance to natural language processing and smart grids, while maintaining strong privacy guarantees.
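The "calibrated noise during training" idea is most commonly realized as DP-SGD: clip each example's gradient to bound its sensitivity, average, then add Gaussian noise. Below is a minimal NumPy sketch of a single such step; the function name `dp_sgd_step` and all parameter names are illustrative, not taken from any specific paper or library, and the sketch omits the privacy accounting needed to convert the noise level into an (epsilon, delta) guarantee.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng=None):
    """One DP-SGD update: per-example clipping, averaging, Gaussian noise.

    per_example_grads: array of shape (batch_size, num_params), one gradient per example.
    clip_norm: maximum L2 norm allowed for any single example's gradient (sensitivity bound).
    noise_multiplier: Gaussian noise scale relative to clip_norm (0 disables noise).
    """
    rng = np.random.default_rng() if rng is None else rng
    batch_size = len(per_example_grads)

    # Clip each example's gradient so no single record can dominate the update.
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg_grad = np.mean(clipped, axis=0)

    # Add Gaussian noise calibrated to the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch_size,
                       size=avg_grad.shape)
    return params - lr * (avg_grad + noise)
```

With `noise_multiplier=0` the step reduces to ordinary clipped SGD, which makes the clipping behavior easy to check in isolation; in practice the multiplier is chosen together with a privacy accountant (e.g. moments accounting) to meet a target privacy budget.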
Papers
PrivLM-Bench: A Multi-level Privacy Evaluation Benchmark for Language Models
Haoran Li, Dadi Guo, Donghao Li, Wei Fan, Qi Hu, Xin Liu, Chunkit Chan, Duanyi Yao, Yuan Yao, Yangqiu Song
Causal Discovery Under Local Privacy
Rūta Binkytė, Carlos Pinzón, Szilvia Lestyán, Kangsoo Jung, Héber H. Arcolezi, Catuscia Palamidessi
An Examination of the Alleged Privacy Threats of Confidence-Ranked Reconstruction of Census Microdata
David Sánchez, Najeeb Jebreel, Krishnamurty Muralidhar, Josep Domingo-Ferrer, Alberto Blanco-Justicia
SoK: Memorisation in machine learning
Dmitrii Usynin, Moritz Knolle, Georgios Kaissis
DP-DCAN: Differentially Private Deep Contrastive Autoencoder Network for Single-cell Clustering
Huifa Li, Jie Fu, Zhili Chen, Xiaomin Yang, Haitao Liu, Xinpeng Ling
Unified Enhancement of Privacy Bounds for Mixture Mechanisms via $f$-Differential Privacy
Chendi Wang, Buxin Su, Jiayuan Ye, Reza Shokri, Weijie J. Su
Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification
Yiwei Li, Chien-Wei Huang, Shuai Wang, Chong-Yung Chi, Tony Q. S. Quek
Privacy-Preserving Federated Learning over Vertically and Horizontally Partitioned Data for Financial Anomaly Detection
Swanand Ravindra Kadhe, Heiko Ludwig, Nathalie Baracaldo, Alan King, Yi Zhou, Keith Houck, Ambrish Rawat, Mark Purcell, Naoise Holohan, Mikio Takeuchi, Ryo Kawahara, Nir Drucker, Hayim Shaul, Eyal Kushnir, Omri Soceanu
Mean Estimation Under Heterogeneous Privacy Demands
Syomantak Chaudhuri, Konstantin Miagkov, Thomas A. Courtade
Conditional Density Estimations from Privacy-Protected Data
Yifei Xiong, Nianqiao P. Ju, Sanguo Zhang
PrivImage: Differentially Private Synthetic Image Generation using Diffusion Models with Semantic-Aware Pretraining
Kecen Li, Chen Gong, Zhixiang Li, Yuzhong Zhao, Xinwen Hou, Tianhao Wang