Private Gradient

Private gradient methods aim to train machine learning models while preserving the privacy of individual data points, typically by clipping per-example gradients and adding calibrated noise during optimization. Current research focuses on improving the accuracy of differentially private stochastic gradient descent (DPSGD) and related algorithms, exploring techniques such as Kalman filtering for noise reduction, gradient shuffling, and optimization approaches tailored to specific model architectures (e.g., transformers, ReLU networks). These advances are central to the responsible use of machine learning on sensitive data, addressing the core trade-off between privacy guarantees and model utility.
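
To make the core recipe concrete, here is a minimal NumPy sketch of one DPSGD step on a logistic-regression loss: clip each example's gradient to an L2 bound, sum, add Gaussian noise calibrated to that bound, and average. The clip norm, noise multiplier, learning rate, and toy data sizes are illustrative assumptions, not values from any of the papers below.

```python
import numpy as np

def dpsgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DPSGD step on a logistic-regression loss (illustrative sketch).

    Per-example gradients are clipped to L2 norm <= clip_norm, summed,
    perturbed with Gaussian noise of std noise_multiplier * clip_norm,
    and averaged over the batch -- the standard DPSGD recipe.
    """
    rng = rng or np.random.default_rng()
    n = X.shape[0]
    # Per-example gradient of the logistic loss: (sigmoid(x.w) - y) * x
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    per_example_grads = (preds - y)[:, None] * X          # shape (n, d)
    # Clip each example's gradient to L2 norm <= clip_norm
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    # Sum, add noise scaled to the clipping bound, then average
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=w.shape)
    return w - lr * (noisy_sum / n)

# Toy usage on random data (sizes are illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
y = (rng.random(64) > 0.5).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = dpsgd_step(w, X, y, rng=rng)
```

In practice the overall (ε, δ) guarantee comes from composing these noisy steps with a privacy accountant (e.g., the moments accountant of Abadi et al., 2016), and per-example gradients for deep networks are computed with vectorized tooling such as Opacus or JAX's vmap rather than the closed form used above.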

Papers