Private Gradient
Private gradient methods aim to train machine learning models while preserving the privacy of individual data points, primarily by clipping per-example gradients and adding calibrated noise to them during optimization. Current research focuses on improving the accuracy of differentially private stochastic gradient descent (DP-SGD) and related algorithms, exploring techniques such as Kalman filtering for noise reduction, gradient shuffling, and optimization approaches tailored to specific model architectures (e.g., transformers, ReLU networks). These advances matter for the responsible use of machine learning on sensitive data, since they directly address the trade-off between privacy guarantees and model utility.
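To make the core mechanism concrete, here is a minimal DP-SGD sketch on a toy linear-regression problem: each example's gradient is clipped to a fixed L2 norm, the clipped gradients are summed, and Gaussian noise scaled by the clipping norm is added before the update. The hyperparameter names and values (clip_norm, noise_multiplier, learning rate) are illustrative assumptions rather than settings from any particular paper, and a real deployment would additionally track the cumulative (epsilon, delta) privacy loss with a privacy accountant.

```python
# Minimal DP-SGD sketch for linear regression, NumPy only.
# Hyperparameters and names are illustrative, not from a specific paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n examples, d features.
n, d = 256, 10
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = X @ true_w + 0.1 * rng.normal(size=n)

def per_example_grads(w, xb, yb):
    """Per-example gradients of the squared-error loss for linear regression."""
    residuals = xb @ w - yb                 # shape (batch,)
    return residuals[:, None] * xb          # shape (batch, d)

def dp_sgd(X, y, epochs=20, batch_size=32, lr=0.1,
           clip_norm=1.0, noise_multiplier=1.0):
    """Clip each per-example gradient to clip_norm, sum, add Gaussian noise
    with std noise_multiplier * clip_norm, then average over the batch."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        for start in range(0, len(X), batch_size):
            batch = idx[start:start + batch_size]
            g = per_example_grads(w, X[batch], y[batch])
            # Clip each example's gradient to L2 norm <= clip_norm.
            norms = np.linalg.norm(g, axis=1, keepdims=True)
            g = g * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
            # Add calibrated Gaussian noise to the summed, clipped gradients.
            noise = rng.normal(scale=noise_multiplier * clip_norm, size=w.shape)
            w -= lr * (g.sum(axis=0) + noise) / len(batch)
    return w

w_private = dp_sgd(X, y)
print("parameter error:", np.linalg.norm(w_private - true_w))
```

The per-example clipping step is what bounds each data point's influence on an update, and the noise scale is tied to that bound; tightening clip_norm or raising noise_multiplier strengthens privacy at the cost of accuracy, which is exactly the utility trade-off discussed above.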