Private Stochastic Gradient Descent
Differentially private stochastic gradient descent (DP-SGD) trains machine learning models on sensitive data while guaranteeing differential privacy, so that individual training examples cannot be inferred from the resulting model. Current research focuses on improving the accuracy and efficiency of DP-SGD through techniques such as gradient shuffling, adaptive noise mechanisms, and optimized gradient-clipping strategies, often applied to deep learning models including transformers and convolutional neural networks. These advances target the inherent trade-off between privacy and model utility, enabling accurate, privacy-preserving models in fields such as healthcare and finance, where data privacy is paramount. Ongoing work also explores decentralized DP-SGD and methods to verify the privacy guarantees of trained models.
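To make the clipping-and-noise mechanism at the core of DP-SGD concrete, the sketch below shows one private update step in NumPy. It is a minimal illustration, not an implementation from any particular library: the function name dp_sgd_step, its parameters (clip_norm, noise_multiplier), and the assumption that per-example gradients are already materialized as an array are all illustrative choices.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One illustrative DP-SGD update (hypothetical helper, not a library API).

    per_example_grads: array of shape (batch_size, num_params),
    one gradient row per training example.
    """
    rng = np.random.default_rng() if rng is None else rng
    batch_size = per_example_grads.shape[0]

    # 1. Clip each example's gradient to L2 norm at most clip_norm,
    #    bounding the influence of any single data point.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale

    # 2. Sum the clipped gradients and add Gaussian noise whose scale is
    #    calibrated to the clipping bound (sigma * C).
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=params.shape
    )

    # 3. Average over the batch and take a standard gradient step.
    return params - lr * noisy_sum / batch_size
```

The clipping bound and noise multiplier jointly determine the privacy guarantee; in practice a privacy accountant tracks the cumulative privacy loss (epsilon, delta) over many such steps.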