Differential Privacy
Differential privacy (DP) is a rigorous framework for protecting individual data in machine learning: by adding carefully calibrated noise to training computations, it guarantees that the inclusion or exclusion of any single record changes the output distribution by only a provably bounded amount. Current research focuses on improving the accuracy of DP models, particularly for large-scale training, through techniques such as adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient processing methods. This work is crucial for enabling the responsible use of sensitive data in applications ranging from healthcare and finance to natural language processing and smart grids, while maintaining strong privacy guarantees.
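Concretely, a randomized mechanism $M$ is $(\epsilon, \delta)$-differentially private if $\Pr[M(D) \in S] \le e^{\epsilon} \Pr[M(D') \in S] + \delta$ for every output set $S$ and every pair of datasets $D, D'$ differing in one record. The sketch below illustrates how this is typically realized during training via per-example gradient clipping plus Gaussian noise, in the style of DP-SGD; the function name `dp_sgd_step` and the parameters `max_norm` and `noise_multiplier` are illustrative assumptions, not drawn from any of the papers listed here.

```python
import numpy as np

def dp_sgd_step(per_example_grads, max_norm=1.0, noise_multiplier=1.1, rng=None):
    """One noisy gradient-aggregation step in the style of DP-SGD (illustrative sketch).

    per_example_grads: array of shape (batch_size, num_params), one gradient
    row per training example.
    """
    rng = rng or np.random.default_rng()
    # Clip each per-example gradient to L2 norm at most max_norm, bounding
    # the sensitivity of the summed gradient to any single example.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, max_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    # Add Gaussian noise whose scale is calibrated to the clipping norm,
    # then average over the batch to get the noisy update direction.
    noise = rng.normal(0.0, noise_multiplier * max_norm, size=clipped.shape[1])
    return (clipped.sum(axis=0) + noise) / len(per_example_grads)
```

Clipping caps how much any one example can move the gradient sum, which is what lets the Gaussian noise scale be converted into a concrete $(\epsilon, \delta)$ guarantee by standard privacy accounting.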
Papers
Efficient Differentially Private Fine-Tuning of Diffusion Models
Jing Liu, Andrew Lowy, Toshiaki Koike-Akino, Kieran Parsons, Ye Wang
Perturb-and-Project: Differentially Private Similarities and Marginals
Vincent Cohen-Addad, Tommaso d'Orsi, Alessandro Epasto, Vahab Mirrokni, Peilin Zhong
Black Box Differential Privacy Auditing Using Total Variation Distance
Antti Koskela, Jafar Mohammadi
Contrastive explainable clustering with differential privacy
Dung Nguyen, Ariel Vetzler, Sarit Kraus, Anil Vullikanti
Auditing Privacy Mechanisms via Label Inference Attacks
Róbert István Busa-Fekete, Travis Dick, Claudio Gentile, Andrés Muñoz Medina, Adam Smith, Marika Swanberg
Synthetic Data Outliers: Navigating Identity Disclosure
Carolina Trindade, Luís Antunes, Tânia Carvalho, Nuno Moniz
Optimality of Matrix Mechanism on $\ell_p^p$-metric
Jingcheng Liu, Jalaj Upadhyay, Zongrui Zou
Differentially Private Tabular Data Synthesis using Large Language Models
Toan V. Tran, Li Xiong
Differentially Private Fine-Tuning of Diffusion Models
Yu-Lin Tsai, Yizhe Li, Zekai Chen, Po-Yu Chen, Chia-Mu Yu, Xuebin Ren, Francois Buet-Golfouse
Seeing the Forest through the Trees: Data Leakage from Partial Transformer Gradients
Weijun Li, Qiongkai Xu, Mark Dras
Private Mean Estimation with Person-Level Differential Privacy
Sushant Agarwal, Gautam Kamath, Mahbod Majid, Argyris Mouzakis, Rose Silver, Jonathan Ullman
Robust Kernel Hypothesis Testing under Data Corruption
Antonin Schrab, Ilmun Kim
Just Rewrite It Again: A Post-Processing Method for Enhanced Semantic Similarity and Privacy Preservation of Differentially Private Rewritten Text
Stephen Meisenbacher, Florian Matthes
Mitigating Disparate Impact of Differential Privacy in Federated Learning through Robust Clustering
Saber Malekmohammadi, Afaf Taik, Golnoosh Farnadi
LMO-DP: Optimizing the Randomization Mechanism for Differentially Private Fine-Tuning (Large) Language Models
Qin Yang, Meisam Mohammad, Han Wang, Ali Payani, Ashish Kundu, Kai Shu, Yan Yan, Yuan Hong