Private Gradient
Private gradient methods aim to train machine learning models while preserving the privacy of individual data points, primarily by adding noise to gradients during optimization. Current research focuses on improving the accuracy of differentially private stochastic gradient descent (DP-SGD) and related algorithms, exploring techniques such as Kalman filtering for noise reduction, gradient shuffling, and optimization approaches tailored to specific model architectures (e.g., transformers, ReLU networks). These advances are crucial for the responsible use of machine learning on sensitive data across applications, where the central challenge is the trade-off between privacy guarantees and model utility.
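To make the core mechanism concrete, the sketch below shows one DP-SGD update in plain NumPy: each per-example gradient is clipped to a fixed L2 norm, Gaussian noise calibrated to that norm is added to the sum, and the noisy average drives the parameter update. This is a minimal illustration, not any specific paper's method; the clip_norm, noise_multiplier, and learning-rate values are hypothetical placeholders rather than tuned or privacy-accounted settings.

```python
import numpy as np

def dpsgd_step(w, X_batch, y_batch, clip_norm=1.0, noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD update for logistic regression on a mini-batch (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    clipped_sum = np.zeros_like(w)
    for x, y in zip(X_batch, y_batch):
        # Per-example gradient of the logistic loss.
        pred = 1.0 / (1.0 + np.exp(-x @ w))
        g = (pred - y) * x
        # Clip each per-example gradient to bound its L2 sensitivity.
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)
        clipped_sum += g
    # Add Gaussian noise scaled to the clipping norm, then average over the batch.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_grad = (clipped_sum + noise) / len(X_batch)
    return w - lr * noisy_grad

# Toy usage: a few synthetic points, one private update.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(3)
w = dpsgd_step(w, X, y, rng=rng)
```

The per-example clipping bounds each data point's influence on the update, which is what lets the added Gaussian noise yield a differential privacy guarantee; in practice the noise scale is chosen via a privacy accountant rather than fixed by hand as here.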