Differential Privacy
Differential privacy (DP) is a rigorous framework for protecting individual data in machine learning: carefully calibrated noise is injected into the training process so that the model's output reveals little about any single training example. Current research focuses on improving the accuracy of DP models, particularly for large-scale training, through techniques such as adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient processing methods. This active area of research is crucial for enabling the responsible use of sensitive data in applications ranging from healthcare and finance to natural language processing and smart grids, while maintaining strong privacy guarantees.
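The noisy-gradient mechanism described above is typically realized as a DP-SGD-style step: clip each per-example gradient to a fixed L2 norm, aggregate, then add Gaussian noise scaled to that clipping bound. The sketch below illustrates this; the function name and parameter values are illustrative, not taken from any of the listed papers.

```python
import numpy as np

def dp_gradient_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One differentially private gradient aggregation step (DP-SGD style).

    Clips each per-example gradient to clip_norm, sums them, adds
    Gaussian noise proportional to the clipping bound, and averages.
    """
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound,
        # so each example's contribution (the sensitivity) is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the sensitivity masks any single
    # example's influence on the aggregated gradient.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Hypothetical batch of two per-example gradients:
grads = [np.array([3.0, 4.0]), np.array([0.1, -0.2])]
noisy_avg = dp_gradient_step(grads)
```

The `noise_multiplier` controls the privacy/accuracy trade-off: larger values give stronger privacy (smaller epsilon, via an accounting method such as the moments accountant) at the cost of noisier updates, which is exactly the tension the techniques surveyed above aim to ease.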
Papers
Algorithms with More Granular Differential Privacy Guarantees
Badih Ghazi, Ravi Kumar, Pasin Manurangsi, Thomas Steinke
Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
Chulin Xie, Yunhui Long, Pin-Yu Chen, Qinbin Li, Arash Nourian, Sanmi Koyejo, Bo Li
DPAUC: Differentially Private AUC Computation in Federated Learning
Jiankai Sun, Xin Yang, Yuanshun Yao, Junyuan Xie, Di Wu, Chong Wang
On Differential Privacy for Federated Learning in Wireless Systems with Multiple Base Stations
Nima Tavangaran, Mingzhe Chen, Zhaohui Yang, José Mairton B. Da Silva, H. Vincent Poor