Private Gradient
Private gradient methods aim to train machine learning models while preserving the privacy of individual data points, primarily by adding noise to gradients during optimization. Current research focuses on improving the accuracy of differentially private stochastic gradient descent (DP-SGD) and related algorithms, exploring techniques such as Kalman filtering for noise reduction, gradient shuffling, and optimization approaches tailored to specific model architectures (e.g., transformers, ReLU networks). These advances are crucial for the responsible use of machine learning on sensitive data, addressing the central trade-off between privacy guarantees and model utility.
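To make the core mechanism concrete, the sketch below shows one DP-SGD step for logistic regression in NumPy: per-example gradients are clipped to a fixed L2 norm, summed, perturbed with calibrated Gaussian noise, and averaged before the parameter update. This is a minimal illustration only; the function name dp_sgd_step and its parameters (clip_norm for the clipping bound C, noise_multiplier for the noise scale sigma) are hypothetical choices for this example, not the API of any particular paper or library.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One illustrative DP-SGD step for logistic regression on a minibatch (X, y).

    clip_norm is the per-example L2 clipping bound C and noise_multiplier is
    sigma, so the noise added to the gradient sum has standard deviation sigma * C.
    """
    rng = rng or np.random.default_rng(0)
    batch_size = X.shape[0]

    # Per-example gradients of the logistic loss, shape (batch, dim).
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    per_example_grads = (preds - y)[:, None] * X

    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale

    # Sum the clipped gradients, add calibrated Gaussian noise, then average.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    return w - lr * noisy_sum / batch_size

# Toy usage on synthetic data: repeated noisy updates from a zero initialization.
rng = np.random.default_rng(42)
X = rng.normal(size=(32, 5))
y = (X @ np.array([1.0, -1.0, 0.5, 0.0, 2.0]) > 0).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y, rng=rng)
```

The privacy guarantee itself depends on how sigma, the clipping bound, the sampling rate, and the number of steps are accounted for (e.g., via a moments or RDP accountant), which this sketch does not implement.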