Differential Privacy
Differential privacy (DP) is a rigorous mathematical framework for bounding how much any single individual's data can influence a computation's output; in machine learning it is typically realized by adding carefully calibrated noise during training. Current research focuses on improving the accuracy of DP models, particularly at large scale, through techniques such as adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient processing methods. This active area of research is crucial for enabling the responsible use of sensitive data in applications ranging from healthcare and finance to natural language processing and smart grids, while maintaining strong privacy guarantees.
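To make the noise-calibration idea concrete, here is a minimal sketch of a DP-SGD-style gradient update using the Gaussian mechanism: each per-example gradient is clipped to a fixed norm (bounding individual influence) and Gaussian noise scaled to that bound is added to the average. The function name, clipping constant, and noise multiplier are illustrative choices, not taken from any of the papers listed below.

```python
import numpy as np

def clip_and_noise(per_example_grads, clip_norm, noise_multiplier, rng=None):
    """Clip each per-example gradient to clip_norm, average them, and add
    Gaussian noise calibrated to the clipping bound (DP-SGD style)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian mechanism: noise stddev proportional to the per-example
    # sensitivity (clip_norm) divided by the batch size.
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    return mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)
```

With `noise_multiplier = 0` this reduces to ordinary clipped averaging; increasing it trades accuracy for a stronger privacy guarantee, which is the tension the work below addresses.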
Papers
Privacy-Preserving Matrix Factorization for Recommendation Systems using Gaussian Mechanism
Sohan Salahuddin Mugdho, Hafiz Imtiaz
RecUP-FL: Reconciling Utility and Privacy in Federated Learning via User-configurable Privacy Defense
Yue Cui, Syed Irfan Ali Meerza, Zhuohang Li, Luyang Liu, Jiaxin Zhang, Jian Liu
Balancing Privacy and Performance for Private Federated Learning Algorithms
Xiangjian Hou, Sarit Khirirat, Mohammad Yaqub, Samuel Horvath
FedBot: Enhancing Privacy in Chatbots with Federated Learning
Addi Ait-Mlouk, Sadi Alawadi, Salman Toor, Andreas Hellander
Privacy Amplification via Compression: Achieving the Optimal Privacy-Accuracy-Communication Trade-off in Distributed Mean Estimation
Wei-Ning Chen, Dan Song, Ayfer Ozgur, Peter Kairouz
Have it your way: Individualized Privacy Assignment for DP-SGD
Franziska Boenisch, Christopher Mühl, Adam Dziedzic, Roy Rinberg, Nicolas Papernot
TraVaG: Differentially Private Trace Variant Generation Using GANs
Majid Rafiei, Frederik Wangelik, Mahsa Pourbafrani, Wil M. P. van der Aalst
Federated Learning in MIMO Satellite Broadcast System
Raphael Pinard, Mitra Hassani, Wayne Lemieux