Differential Privacy
Differential privacy (DP) is a rigorous mathematical framework for limiting how much any single individual's data can influence a released model or statistic; in machine learning it is most often achieved by adding carefully calibrated noise during training. Current research focuses on improving the accuracy of DP models, particularly for large-scale training, through techniques like adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient processing methods. This active area of research is crucial for enabling the responsible use of sensitive data in applications ranging from healthcare and finance to natural language processing and smart grids, while maintaining strong privacy guarantees.
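To make the "calibrated noise during training" idea concrete, below is a minimal sketch of a DP-SGD-style noisy gradient step: per-example gradients are clipped to bound each individual's contribution, averaged, and perturbed with Gaussian noise. The function and parameter names (`dp_sgd_step`, `clip_norm`, `noise_multiplier`) are illustrative assumptions and are not drawn from any of the papers listed here.

```python
# Illustrative sketch of a noisy (DP-SGD-style) gradient step.
# All names and parameter values here are assumptions for demonstration only.
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip each example's gradient, average, and add calibrated Gaussian noise.

    per_example_grads: array of shape (batch_size, num_params).
    Noise std is noise_multiplier * clip_norm / batch_size, matching the
    usual Gaussian-mechanism calibration for the averaged clipped gradients.
    """
    rng = rng or np.random.default_rng()
    batch_size = per_example_grads.shape[0]

    # Per-example L2 clipping bounds each individual's influence (sensitivity).
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale

    # Average and add Gaussian noise scaled to the clipping bound.
    mean_grad = clipped.mean(axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch_size,
                       size=mean_grad.shape)
    return mean_grad + noise

# Usage: a batch of 32 per-example gradients over 10 parameters.
grads = np.random.default_rng(0).normal(size=(32, 10))
noisy_grad = dp_sgd_step(grads)
print(noisy_grad.shape)  # (10,)
```

The privacy guarantee of such a step depends on the noise multiplier, the sampling rate, and the number of iterations, and is typically tracked with a privacy accountant rather than derived from this sketch alone.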
Papers
Privately Learning from Graphs with Applications in Fine-tuning Large Language Models
Haoteng Yin, Rongzhe Wei, Eli Chien, Pan Li
Private Language Models via Truncated Laplacian Mechanism
Tianhao Huang, Tao Yang, Ivan Habernal, Lijie Hu, Di Wang
Adaptive Batch Size for Privately Finding Second-Order Stationary Points
Daogao Liu, Kunal Talwar
Private and Communication-Efficient Federated Learning based on Differentially Private Sketches
Meifan Zhang, Zhanhong Xie, Lihua Yin
KnowledgeSG: Privacy-Preserving Synthetic Text Generation with Knowledge Distillation from Server
Wenhao Wang, Xiaoyu Liang, Rui Ye, Jingyi Chai, Siheng Chen, Yanfeng Wang
Efficient and Private Marginal Reconstruction with Local Non-Negativity
Brett Mullins, Miguel Fuentes, Yingtai Xiao, Daniel Kifer, Cameron Musco, Daniel Sheldon
Convergent Privacy Loss of Noisy-SGD without Convexity and Smoothness
Eli Chien, Pan Li
Thinking Outside of the Differential Privacy Box: A Case Study in Text Privatization with Language Model Prompting
Stephen Meisenbacher, Florian Matthes
Differentially Private Active Learning: Balancing Effective Data Selection and Privacy
Kristian Schwethelm, Johannes Kaiser, Jonas Kuntzer, Mehmet Yigitsoy, Daniel Rueckert, Georgios Kaissis
Federated Online Prediction from Experts with Differential Privacy: Separations and Regret Speed-ups
Fengyu Gao, Ruiquan Huang, Jing Yang
Differential privacy for protecting patient data in speech disorder detection using deep learning
Soroosh Tayebi Arasteh, Mahshad Lotfinia, Paula Andrea Perez-Toro, Tomas Arias-Vergara, Juan Rafael Orozco-Arroyave, Maria Schuster, Andreas Maier, Seung Hee Yang
CURATE: Scaling-up Differentially Private Causal Graph Discovery
Payel Bhattacharjee, Ravi Tandon
Differentially Private Non Parametric Copulas: Generating synthetic data with non parametric copulas under privacy guarantees
Pablo A. Osorio-Marulanda, John Esteban Castro Ramirez, Mikel Hernández Jiménez, Nicolas Moreno Reyes, Gorka Epelde Unanue