Differential Privacy
Differential privacy (DP) is a rigorous mathematical framework for protecting individual records: a randomized algorithm M is (ε, δ)-differentially private if, for any two datasets D and D′ differing in a single record and any set of outputs S, Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ. In machine learning, this guarantee is typically achieved by adding carefully calibrated noise during training. Current research focuses on improving the accuracy of DP models, particularly at large scale, through techniques such as adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient processing methods. This work is crucial for enabling the responsible use of sensitive data in applications ranging from healthcare and finance to natural language processing and smart grids, while preserving strong, provable privacy guarantees.
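As a minimal, illustrative sketch of the noise-addition mechanism described above (not taken from any of the papers below), the following numpy function shows the core of DP-SGD-style training: each example's gradient is clipped to bound its influence, and Gaussian noise calibrated to that bound is added to the average. The function name and default parameters are hypothetical.

    import numpy as np

    def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
        # One privatized gradient step (illustrative sketch): clip each
        # example's gradient to clip_norm, average, then add Gaussian noise
        # scaled to the clip norm. Names and defaults are hypothetical.
        rng = np.random.default_rng() if rng is None else rng
        clipped = []
        for g in per_example_grads:
            norm = np.linalg.norm(g)
            # Scale down any gradient whose L2 norm exceeds the threshold,
            # bounding each example's contribution (its sensitivity).
            clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
        mean_grad = np.mean(clipped, axis=0)
        # Gaussian noise calibrated to the per-example sensitivity; the
        # noise_multiplier maps to an (epsilon, delta) guarantee via a
        # privacy accountant, which is omitted here.
        noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                           size=mean_grad.shape)
        return mean_grad + noise

A training loop would call a step like this once per batch and track the cumulative privacy loss across steps with an accounting method such as moments accounting.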
Papers
Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM
Chulin Xie, Pin-Yu Chen, Qinbin Li, Arash Nourian, Ce Zhang, Bo Li
Improved Generalization Guarantees in Restricted Data Models
Elbert Du, Cynthia Dwork
FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning
Yuanhao Xiong, Ruochen Wang, Minhao Cheng, Felix Yu, Cho-Jui Hsieh
Differentially Private Federated Combinatorial Bandits with Constraints
Sambhav Solanki, Samhita Kanaparthy, Sankarshan Damle, Sujit Gujar
DPOAD: Differentially Private Outsourcing of Anomaly Detection through Iterative Sensitivity Learning
Meisam Mohammady, Han Wang, Lingyu Wang, Mengyuan Zhang, Yosr Jarraya, Suryadipta Majumdar, Makan Pourzandi, Mourad Debbabi, Yuan Hong