Differentially Private

Differential privacy (DP) is a rigorous framework for training machine learning models on sensitive data while guaranteeing individual privacy. Current research focuses on improving the accuracy of DP models, particularly for large-scale architectures such as vision transformers, often employing techniques like per-example gradient clipping, calibrated noise addition, and advanced optimization methods such as momentum aggregation and sharpness-aware minimization. These advances aim to mitigate the inherent trade-off between privacy and utility, enabling the responsible use of sensitive data in applications ranging from healthcare to federated learning. The field is also actively exploring theoretical connections between DP and other desirable properties such as robustness and generalization.
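The clipping-and-noise recipe mentioned above is the core of DP-SGD. A minimal sketch of one aggregation step is given below; the function name and parameter choices are illustrative, not taken from any particular library:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step (illustrative sketch):
    clip each per-example gradient to L2 norm <= clip_norm,
    sum, add Gaussian noise scaled to the clipping bound,
    then average over the batch."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only when the gradient exceeds the clipping bound.
        factor = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(g * factor)
    total = np.sum(clipped, axis=0)
    # Noise standard deviation is tied to the clipping bound, so the
    # privacy guarantee is independent of any single example's gradient.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
grads = [rng.normal(size=5) for _ in range(8)]
noisy_avg = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

The noise multiplier controls the privacy/utility trade-off: larger values give stronger privacy guarantees (via a privacy accountant) at the cost of noisier, less accurate updates.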

Papers