Differentially Private SGD (DP-SGD)

Differentially Private Stochastic Gradient Descent (DP-SGD) trains machine learning models under a formal differential privacy guarantee, limiting how much any single training example can influence the learned model; it does so by clipping each example's gradient and adding calibrated noise at every update. Current research focuses on improving DP-SGD's efficiency and accuracy, particularly for large language models and deep architectures, exploring techniques such as user-level privacy, adaptive clipping, and alternative optimizers beyond plain SGD. These advances address the substantial computational cost and utility loss typically associated with DP training, making privacy-preserving machine learning practical in sensitive domains. The field is also actively developing tighter privacy accounting methods and tools for evaluating and auditing the privacy guarantees of trained models.
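
To make the core update concrete, here is a minimal NumPy sketch of one DP-SGD step for logistic regression: per-example gradients are clipped to an L2 bound and summed, Gaussian noise scaled to that bound is added, and the noisy average drives the parameter update. The function and parameter names (`dp_sgd_step`, `clip_norm`, `noise_mult`) and the toy data are illustrative assumptions, and the sketch omits the privacy accountant and Poisson subsampling used in a real deployment.

```python
import numpy as np

def per_example_grads(w, X, y):
    """Per-example logistic-regression gradients, one row per example."""
    p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
    return (p - y)[:, None] * X           # d(loss_i)/dw for each example i

def dp_sgd_step(w, X_batch, y_batch, rng, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    """One DP-SGD update: clip per-example gradients, add Gaussian noise, step."""
    grads = per_example_grads(w, X_batch, y_batch)
    # Clip each example's gradient to L2 norm at most `clip_norm`.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)
    # Sum clipped gradients, add noise with per-coordinate std noise_mult * clip_norm,
    # then average over the batch and take a gradient step.
    noisy_sum = grads.sum(axis=0) + rng.normal(scale=noise_mult * clip_norm,
                                               size=w.shape)
    return w - lr * noisy_sum / len(X_batch)

# Toy run on synthetic data (illustrative only; real use needs a privacy accountant).
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(float)
w = np.zeros(5)
for _ in range(200):
    idx = rng.choice(len(X), size=32, replace=False)  # stand-in for Poisson subsampling
    w = dp_sgd_step(w, X[idx], y[idx], rng)
```

The clipping bound caps each example's contribution (its sensitivity), which is what lets the added Gaussian noise translate into a differential privacy guarantee; the overall (epsilon, delta) budget is then tracked across iterations by a separate accounting method.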

Papers