Differential Privacy
Differential privacy (DP) is a rigorous framework for protecting individual data in machine learning by adding carefully calibrated noise during model training. Current research focuses on improving the accuracy of DP models, particularly at large scale, through techniques such as adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient processing methods. This work is crucial for enabling the responsible use of sensitive data in applications ranging from healthcare and finance to natural language processing and smart grids, while maintaining strong privacy guarantees.
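To make "carefully calibrated noise" concrete, below is a minimal sketch of one DP-SGD step, the standard recipe of per-example gradient clipping followed by Gaussian noise scaled to the clipping bound. The function name and parameter values are illustrative assumptions, not drawn from any of the papers listed below.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                lr=0.1, rng=None):
    """One illustrative DP-SGD update (names and defaults are assumptions).

    Clips each example's gradient to L2 norm <= clip_norm, sums, adds
    Gaussian noise with std = noise_multiplier * clip_norm (the clipping
    bound is the sensitivity of the sum), then averages and scales by -lr.
    """
    rng = rng or np.random.default_rng(0)
    # Per-example L2 norms, kept as a column for broadcasting.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Scale down any gradient whose norm exceeds clip_norm.
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Noise the sum, calibrated to the clipping bound, then average.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=clipped.shape[1]
    )
    return -lr * noisy_sum / len(per_example_grads)  # parameter update

# Example usage on a toy batch of 32 per-example gradients in 10 dimensions.
grads = np.random.default_rng(1).normal(size=(32, 10))
update = dp_sgd_step(grads)
```

The clipping step bounds each individual's influence on the update, which is what lets the added Gaussian noise translate into a formal (epsilon, delta) privacy guarantee; accounting for the exact guarantee across many steps is where techniques like subsampling amplification (see the papers below) come in.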
Papers
Knowledge Distillation-Based Model Extraction Attack using GAN-based Private Counterfactual Explanations
Fatima Ezzeddine, Omran Ayoub, Silvia Giordano
A Comparative Analysis of Word-Level Metric Differential Privacy: Benchmarking The Privacy-Utility Trade-off
Stephen Meisenbacher, Nihildev Nandakumar, Alexandra Klymenko, Florian Matthes
Differentially Private Next-Token Prediction of Large Language Models
James Flemings, Meisam Razaviyayn, Murali Annavaram
DP-Dueling: Learning from Preference Feedback without Compromising User Privacy
Aadirupa Saha, Hilal Asi
Adaptive Coded Federated Learning: Privacy Preservation and Straggler Mitigation
Chengxi Li, Ming Xiao, Mikael Skoglund
Privacy Amplification for the Gaussian Mechanism via Bounded Support
Shengyuan Hu, Saeed Mahloujifar, Virginia Smith, Kamalika Chaudhuri, Chuan Guo
Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification
Jan Schuchardt, Mihail Stoian, Arthur Kosmala, Stephan Günnemann
Privacy-preserving Fine-tuning of Large Language Models through Flatness
Tiejin Chen, Longchao Da, Huixue Zhou, Pingzhi Li, Kaixiong Zhou, Tianlong Chen, Hua Wei