DP-SGD
Differentially Private Stochastic Gradient Descent (DP-SGD) is a technique for training machine learning models while preserving the privacy of the training data, primarily by clipping per-example gradients and adding calibrated noise during training. Current research focuses on improving the accuracy and efficiency of DP-SGD, exploring alternative noise-addition strategies, gradient clipping methods (including pre-projection and batch clipping), and sampling techniques (such as shuffling and Poisson subsampling) to optimize the privacy-utility trade-off. This work also investigates different model architectures, such as Kolmogorov-Arnold Networks, and explores post-training noise addition as an alternative approach. The ultimate goal is to enable the development and deployment of accurate, privacy-preserving machine learning models across a range of applications.
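To make the mechanism concrete, below is a minimal NumPy sketch of a single DP-SGD step for logistic regression. It illustrates the three ingredients mentioned above (Poisson subsampling, per-example gradient clipping, and Gaussian noise addition); the function and parameter names (`dp_sgd_step`, `clip_norm`, `noise_multiplier`, `sample_rate`) are illustrative assumptions, not the API of any particular library.

```python
import numpy as np

def per_example_grads(w, X, y):
    """Per-example logistic-regression gradients, shape (n, d)."""
    preds = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid predictions
    return (preds - y)[:, None] * X            # one gradient row per example

def dp_sgd_step(w, X, y, lr=0.1, sample_rate=0.01,
                clip_norm=1.0, noise_multiplier=1.1, rng=None):
    rng = rng or np.random.default_rng()

    # Poisson subsampling: each example is included independently
    # with probability sample_rate.
    mask = rng.random(len(X)) < sample_rate
    Xb, yb = X[mask], y[mask]
    if len(Xb) == 0:
        return w                               # empty batch: skip the update

    grads = per_example_grads(w, Xb, yb)

    # Per-example clipping: rescale each gradient so its L2 norm
    # is at most clip_norm.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # Sum the clipped gradients and add Gaussian noise whose scale is
    # calibrated to the clipping bound.
    noisy_sum = grads.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)

    # Average over the expected batch size and take a gradient step.
    expected_batch = sample_rate * len(X)
    return w - lr * noisy_sum / expected_batch
```

In practice, the noise multiplier is not chosen by hand but set via a privacy accountant (for example, a moments or Rényi-DP accountant) so that the full training run meets a target (ε, δ) budget; that accounting step is omitted from the sketch.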