Differential Privacy
Differential privacy (DP) is a rigorous framework for protecting individual data in machine learning: carefully calibrated noise is injected into the training process so that a model's output reveals little about any single record. Formally, a mechanism M is (ε, δ)-differentially private if, for any two datasets D and D' differing in one record and any set of outputs S, Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D') ∈ S] + δ. Current research focuses on narrowing the accuracy gap between private and non-private models, particularly for large-scale training, through techniques such as adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient processing methods. This active area of research is crucial for enabling the responsible use of sensitive data in applications ranging from healthcare and finance to natural language processing and smart grids, while maintaining strong privacy guarantees.
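To make the noise-injection idea concrete, below is a minimal DP-SGD-style sketch in NumPy: per-example gradients are clipped to a fixed norm and Gaussian noise scaled to that clipping bound is added to the averaged gradient. The function name dp_sgd_step and the default clipping norm, noise multiplier, and learning rate are illustrative assumptions, not drawn from any of the papers listed here.

    import numpy as np

    def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                    noise_multiplier=1.0, lr=0.1, rng=None):
        """One DP-SGD-style update: clip each per-example gradient to
        clip_norm, average, then add Gaussian noise calibrated to the
        clipping bound (Gaussian mechanism)."""
        rng = rng or np.random.default_rng(0)
        clipped = []
        for g in per_example_grads:
            norm = np.linalg.norm(g)
            # Scale down any gradient whose norm exceeds clip_norm.
            clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
        grad = np.mean(clipped, axis=0)
        # Noise std = noise_multiplier * clip_norm / batch_size.
        noise = rng.normal(0.0,
                           noise_multiplier * clip_norm / len(per_example_grads),
                           size=grad.shape)
        return params - lr * (grad + noise)

    # Example: one private update on a toy 3-parameter model.
    w = np.zeros(3)
    grads = [np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.5, 0.0])]
    w = dp_sgd_step(w, grads)

This shows only a single noisy update; tracking the cumulative (ε, δ) budget spent across many training steps requires a separate privacy accountant, which the sketch omits.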
Papers
Pre-training Differentially Private Models with Limited Public Data
Zhiqi Bu, Xinwei Zhang, Mingyi Hong, Sheng Zha, George Karypis
GraphPub: Generation of Differential Privacy Graph with High Availability
Wanghan Xu, Bin Shi, Ao Liu, Jiqiang Zhang, Bo Dong
Lower Bounds for Differential Privacy Under Continual Observation and Online Threshold Queries
Edith Cohen, Xin Lyu, Jelani Nelson, Tamás Sarlós, Uri Stemmer
Auditable Homomorphic-based Decentralized Collaborative AI with Attribute-based Differential Privacy
Lo-Yao Yeh, Sheng-Po Tseng, Chia-Hsun Lu, Chih-Ya Shen
Differential Private Federated Transfer Learning for Mental Health Monitoring in Everyday Settings: A Case Study on Stress Detection
Ziyu Wang, Zhongqi Yang, Iman Azimi, Amir M. Rahmani
TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data
Richeng Jin, Yujie Gu, Kai Yue, Xiaofan He, Zhaoyang Zhang, Huaiyu Dai
Connect the dots: Dataset Condensation, Differential Privacy, and Adversarial Uncertainty
Kenneth Odoh