Differentially Private Machine Learning
Differential privacy (DP) is a rigorous framework for training machine learning models on sensitive data while guaranteeing individual privacy. Current research focuses on improving the accuracy of DP models, particularly for large-scale architectures such as vision transformers and other deep neural networks, often employing techniques like per-example gradient clipping, calibrated noise addition, and advanced optimization methods such as momentum aggregation and sharpness-aware minimization. These advances aim to mitigate the inherent trade-off between privacy and utility, enabling the responsible use of sensitive data in applications ranging from healthcare to federated learning. The field is also actively exploring theoretical connections between DP and other desirable properties such as robustness and generalization.
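The clip-then-noise aggregation mentioned above (the core of DP-SGD) can be sketched in a few lines. This is a minimal illustration, assuming per-example gradients are already available as NumPy arrays; the function name and parameter defaults are illustrative, not taken from any particular library.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One privatized gradient aggregation step, in the style of DP-SGD:
    clip each example's gradient to L2 norm `clip_norm`, sum, add Gaussian
    noise scaled to the clip bound, then average over the batch."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clip bound;
        # this caps each individual's influence (the sensitivity) at clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the per-example sensitivity (clip_norm).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

The privacy guarantee itself comes from accounting across all training steps (e.g. via a moments accountant), which this sketch omits; the `noise_multiplier` would be chosen to meet a target privacy budget.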
Papers
Private graphon estimation via sum-of-squares
Hongjie Chen, Jingqiu Ding, Tommaso d'Orsi, Yiding Hua, Chih-Hung Liu, David Steurer
Smooth Sensitivity for Learning Differentially-Private yet Accurate Rule Lists
Timothée Ly (LAAS-ROC), Julien Ferry (EPM), Marie-José Huguet (LAAS-ROC), Sébastien Gambs (UQAM), Ulrich Aivodji (ETS)