Differential Privacy
Differential privacy (DP) is a rigorous framework for protecting individual data in machine learning, most commonly by adding carefully calibrated noise during model training. Current research focuses on improving the accuracy of DP models, particularly at large scale, through techniques such as adaptive noise allocation, Kalman filtering for noise reduction, and novel gradient processing methods. This work is crucial for enabling the responsible use of sensitive data in applications ranging from healthcare and finance to natural language processing and smart grids, while maintaining strong privacy guarantees.
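The noise-calibration idea is easiest to see in the standard DP-SGD step: clip each example's gradient to a fixed norm, sum, and add Gaussian noise scaled to that clipping bound. Below is a minimal NumPy sketch of that single step; the clip norm of 1.0, the noise multiplier of 1.1, and the toy gradients are illustrative assumptions, not parameters taken from any of the papers listed here.

```python
import numpy as np

def private_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                     rng=None):
    """Clip each example's gradient to `clip_norm`, sum, and add Gaussian
    noise with standard deviation noise_multiplier * clip_norm.
    (Values here are illustrative assumptions, not from the papers above.)"""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    # Average over the batch after noising, as in standard DP-SGD.
    return (summed + noise) / len(per_example_grads)

# Toy usage: four per-example gradients for a 3-parameter model.
grads = [np.array([0.5, -1.2, 2.0]), np.array([0.1, 0.4, -0.3]),
         np.array([3.0, 0.0, 1.0]), np.array([-0.7, 0.9, 0.2])]
print(private_gradient(grads))
```

Clipping bounds each example's influence on the update, which is what lets the Gaussian noise translate into a formal privacy guarantee; the techniques surveyed in the papers below refine how and where this noise is allocated.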
Papers
A Hassle-free Algorithm for Private Learning in Practice: Don't Use Tree Aggregation, Use BLTs
H. Brendan McMahan, Zheng Xu, Yanxiang Zhang
A Multivocal Literature Review on Privacy and Fairness in Federated Learning
Beatrice Balbierer, Lukas Heinlein, Domenique Zipperling, Niklas Kühl
Fairness Issues and Mitigations in (Differentially Private) Socio-demographic Data Processes
Joonhyuk Ko, Juba Ziani, Saswat Das, Matt Williams, Ferdinando Fioretto
Differential Privacy of Cross-Attention with Provable Guarantee
Yingyu Liang, Zhenmei Shi, Zhao Song, Yufa Zhou
Universally Harmonizing Differential Privacy Mechanisms for Federated Learning: Boosting Accuracy and Convergence
Shuya Feng, Meisam Mohammady, Hanbin Hong, Shenao Yan, Ashish Kundu, Binghui Wang, Yuan Hong