Differential Privacy
Differential privacy (DP) is a rigorous mathematical framework for limiting what can be inferred about any individual's data; in machine learning it is typically enforced by adding carefully calibrated noise during training. Current research focuses on improving the accuracy of DP models, particularly at large scale, through techniques such as adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient processing methods. This active area of research is crucial for enabling the responsible use of sensitive data in applications ranging from healthcare and finance to natural language processing and smart grids, while maintaining strong privacy guarantees.
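The "carefully calibrated noise" mentioned above is most commonly added via DP-SGD-style gradient processing: each example's gradient is clipped to a fixed norm so that no single record can dominate, and Gaussian noise scaled to that clipping bound is added to the aggregate. A minimal sketch with NumPy follows; the function name and the `clip_norm` and `noise_multiplier` values are illustrative assumptions, not taken from any paper listed below.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD aggregation step (illustrative sketch).

    Clips each per-example gradient to clip_norm, sums the clipped
    gradients, adds Gaussian noise with standard deviation
    noise_multiplier * clip_norm, and returns the noisy average.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise is calibrated to the sensitivity set by clip_norm.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_sample_grads)
```

The clipping bound fixes the sensitivity of the sum, which is what lets the noise scale be chosen to meet a target (ε, δ) privacy guarantee; the privacy-accounting step itself is omitted here.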
Papers
Explainable Hyperdimensional Computing for Balancing Privacy and Transparency in Additive Manufacturing Monitoring
Fardin Jalil Piran, Prathyush P. Poduval, Hamza Errahmouni Barkam, Mohsen Imani, Farhad Imani
A Differentially Private Blockchain-Based Approach for Vertical Federated Learning
Linh Tran, Sanjay Chari, Md. Saikat Islam Khan, Aaron Zachariah, Stacy Patterson, Oshani Seneviratne
Characterizing Stereotypical Bias from Privacy-preserving Pre-Training
Stefan Arnold, Rene Gröbner, Annika Schreiner
A Collocation-based Method for Addressing Challenges in Word-level Metric Differential Privacy
Stephen Meisenbacher, Maulik Chevli, Florian Matthes
DP-MLM: Differentially Private Text Rewriting Using Masked Language Models
Stephen Meisenbacher, Maulik Chevli, Juraj Vladika, Florian Matthes
A Zero Auxiliary Knowledge Membership Inference Attack on Aggregate Location Data
Vincent Guan, Florent Guépin, Ana-Maria Cretu, Yves-Alexandre de Montjoye
Enhancing Federated Learning with Adaptive Differential Privacy and Priority-Based Aggregation
Mahtab Talaei, Iman Izadi
A Quantization-based Technique for Privacy Preserving Distributed Learning
Maurizio Colombo, Rasool Asal, Ernesto Damiani, Lamees Mahmoud AlQassem, Al Anoud Almemari, Yousof Alhammadi