Private Stochastic Optimization
Private stochastic optimization aims to train machine learning models while guaranteeing differential privacy, protecting the individual data points in the training set. Current research focuses on improving the efficiency and accuracy of differentially private stochastic gradient descent (DP-SGD) and its variants, exploring techniques such as Kalman filtering, low-pass filtering, and adaptive clipping to mitigate the performance degradation caused by noise injection. These advances are crucial for the responsible use of large datasets in privacy-sensitive applications such as healthcare and finance, and they are driving improvements in both theoretical privacy guarantees and practical training efficiency.
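To make the DP-SGD recipe mentioned above concrete, the sketch below shows one update step in plain NumPy: each per-example gradient is clipped to a fixed L2 norm, the clipped gradients are summed, and Gaussian noise calibrated to the clipping norm is added before averaging. This is a minimal illustration of the standard mechanism, not the method of any particular paper; the squared-error loss, the function name dp_sgd_step, and the parameter values (clip_norm, noise_multiplier, lr) are assumptions chosen for readability, and adaptive-clipping or filtering variants would adjust or post-process these quantities over training.

```python
# Minimal DP-SGD step sketch (NumPy only). All hyperparameter values are
# illustrative assumptions, not recommendations from any cited paper.
import numpy as np

def dp_sgd_step(w, X, y, clip_norm=1.0, noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD update: clip each per-example gradient, sum, add noise, step."""
    rng = rng or np.random.default_rng(0)
    n = X.shape[0]
    clipped_sum = np.zeros_like(w)
    for i in range(n):
        # Per-example gradient of the squared error 0.5 * (x_i . w - y_i)^2.
        g = (X[i] @ w - y[i]) * X[i]
        # Clip its L2 norm to clip_norm, bounding each example's influence.
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)
        clipped_sum += g
    # Gaussian noise scaled to the clipping norm provides the privacy guarantee.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    return w - lr * (clipped_sum + noise) / n

# Toy usage: a few hundred noisy steps on synthetic linear-regression data.
rng = np.random.default_rng(1)
X = rng.normal(size=(128, 5))
w_true = np.arange(5, dtype=float)
y = X @ w_true + 0.1 * rng.normal(size=128)
w = np.zeros(5)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)
print("learned weights:", np.round(w, 2))
```

In practice, libraries such as Opacus and TensorFlow Privacy handle per-example gradient computation and privacy accounting; the loop above only illustrates the clipping-and-noise mechanics that those accountants reason about.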