Differential Privacy
Differential privacy (DP) is a rigorous framework for protecting individual records in machine learning: an algorithm M is (ε, δ)-differentially private if its output distribution changes only negligibly when any single training example is added or removed, i.e., Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ for any datasets D, D′ differing in one record. In practice, this guarantee is obtained by adding carefully calibrated noise to the training process. Current research focuses on improving the accuracy of DP models, particularly for large-scale training, through techniques such as adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient-processing methods. This active area of research is crucial for enabling the responsible use of sensitive data in applications ranging from healthcare and finance to natural language processing and smart grids, while maintaining strong privacy guarantees.
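To make the "calibrated noise during training" idea concrete, below is a minimal, illustrative sketch of the standard DP-SGD recipe (per-example gradient clipping followed by Gaussian noise). The function `dp_sgd_step`, the linear least-squares model, and the specific `clip_norm` / `noise_multiplier` values are assumptions chosen for illustration; they are not the method of any particular paper listed here, and a real deployment would also track the cumulative privacy budget with a privacy accountant.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One illustrative DP-SGD step on a squared-error loss.

    Per-example gradients are clipped to L2 norm <= clip_norm, summed,
    perturbed with Gaussian noise scaled by noise_multiplier * clip_norm,
    then averaged. All hyperparameter values are assumptions for the demo.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(X)
    # Per-example gradients of 0.5 * (x.w - y)^2 with respect to w.
    residuals = X @ w - y                    # shape (n,)
    grads = residuals[:, None] * X           # shape (n, d)
    # Clip each example's gradient so no single record dominates.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Add Gaussian noise calibrated to the clipping bound, then average.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_mean_grad = (grads.sum(axis=0) + noise) / n
    return w - lr * noisy_mean_grad

# Toy usage: privately fit w on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=256)
w = np.zeros(3)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)
print(w)  # roughly recovers w_true, with some clipping/noise-induced error
```

The clipping bound caps each example's influence on the update (its sensitivity), which is what lets the Gaussian noise scale be set independently of the data; this tension between clipping bias and noise variance is exactly what techniques like adaptive noise allocation and improved gradient processing aim to ease.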
Papers
Bounding Membership Inference
Anvith Thudi, Ilia Shumailov, Franziska Boenisch, Nicolas Papernot
Debugging Differential Privacy: A Case Study for Privacy Auditing
Florian Tramer, Andreas Terzis, Thomas Steinke, Shuang Song, Matthew Jagielski, Nicholas Carlini
How reparametrization trick broke differentially-private text representation learning
Ivan Habernal
Differentially Private Estimation of Heterogeneous Causal Effects
Fengshi Niu, Harsha Nori, Brian Quistorff, Rich Caruana, Donald Ngwe, Aadharsh Kannan
Quantum Differential Privacy: An Information Theory Perspective
Christoph Hirche, Cambyse Rouzé, Daniel Stilck França
Differential Secrecy for Distributed Data and Applications to Robust Differentially Secure Vector Summation
Kunal Talwar
Improved Differential Privacy for SGD via Optimal Private Linear Operators on Adaptive Streams
Sergey Denisov, Brendan McMahan, Keith Rush, Adam Smith, Abhradeep Guha Thakurta
Contextualize differential privacy in image database: a lightweight image differential privacy approach based on principle component analysis inverse
Shiliang Zhang, Xuehui Ma, Hui Cao, Tengyuan Zhao, Yajie Yu, Zhuzhu Wang
Differential Privacy and Fairness in Decisions and Learning Tasks: A Survey
Ferdinando Fioretto, Cuong Tran, Pascal Van Hentenryck, Keyu Zhu