Private Stochastic Optimization
Private stochastic optimization aims to train machine learning models while guaranteeing differential privacy, protecting individual data points within training datasets. Current research focuses on improving the efficiency and accuracy of differentially private stochastic gradient descent (DP-SGD) and its variants, exploring techniques like Kalman filtering, low-pass filtering, and adaptive clipping to mitigate the performance degradation caused by noise injection. These advancements are crucial for enabling the responsible use of large datasets in sensitive applications like healthcare and finance, where privacy is paramount, and are driving improvements in both theoretical privacy guarantees and practical training efficiency.
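To make the core mechanism concrete, below is a minimal NumPy sketch of a DP-SGD update: each per-example gradient is clipped to a fixed L2 norm, the clipped gradients are summed, calibrated Gaussian noise is added, and the noisy average drives an ordinary gradient step. The toy linear-regression model, the function name dp_sgd_step, and all hyperparameters are illustrative assumptions, not taken from any particular paper; a real deployment would also track the cumulative privacy loss with a privacy accountant.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, clip_norm, noise_multiplier, lr, rng):
    """One DP-SGD update (illustrative sketch): clip each per-example gradient
    to L2 norm `clip_norm`, sum, add Gaussian noise with standard deviation
    `noise_multiplier * clip_norm`, average over the batch, and step."""
    batch_size = per_sample_grads.shape[0]

    # Per-example clipping: rescale any gradient whose norm exceeds clip_norm.
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_sample_grads * scale

    # Sum clipped gradients, add calibrated Gaussian noise, then average.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean_grad = (clipped.sum(axis=0) + noise) / batch_size

    return params - lr * noisy_mean_grad

# Toy usage: privately fit a linear model y ~ X @ w with squared loss.
rng = np.random.default_rng(0)
n, d = 256, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)
for step in range(500):
    idx = rng.choice(n, size=32, replace=False)        # minibatch subsampling
    residuals = X[idx] @ w - y[idx]
    per_sample_grads = residuals[:, None] * X[idx]     # gradient of 0.5*(x.w - y)^2 per example
    w = dp_sgd_step(w, per_sample_grads, clip_norm=1.0,
                    noise_multiplier=1.0, lr=0.1, rng=rng)
```

The clipping bound and noise multiplier are exactly the knobs that the adaptive-clipping and filtering variants mentioned above try to tune or denoise, since larger noise improves the privacy guarantee but degrades the utility of each gradient step.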