Private Stochastic Optimization
Private stochastic optimization aims to train machine learning models while guaranteeing differential privacy, protecting individual data points within training datasets. Current research focuses on improving the efficiency and accuracy of differentially private stochastic gradient descent (DP-SGD) and its variants, exploring techniques like Kalman filtering, low-pass filtering, and adaptive clipping to mitigate the performance degradation caused by noise injection. These advancements are crucial for enabling the responsible use of large datasets in sensitive applications like healthcare and finance, where privacy is paramount, and are driving improvements in both theoretical privacy guarantees and practical training efficiency.
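To make the core mechanism concrete, below is a minimal NumPy sketch of a single DP-SGD update for logistic regression: each example's gradient is clipped to a fixed L2 norm and Gaussian noise calibrated to that clipping bound is added before the averaged update. The function and parameter names (`dp_sgd_step`, `clip_norm`, `noise_multiplier`) are illustrative assumptions, not drawn from any particular paper, and the sketch omits privacy accounting.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One illustrative DP-SGD step for logistic regression on a mini-batch (X, y)."""
    rng = rng or np.random.default_rng()
    batch_size = X.shape[0]

    # Per-example gradients of the logistic loss, shape (batch, dim).
    logits = X @ w
    probs = 1.0 / (1.0 + np.exp(-logits))
    per_example_grads = (probs - y)[:, None] * X

    # Clip each example's gradient to L2 norm `clip_norm`.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale

    # Sum, add Gaussian noise scaled to the clipping bound, then average and update.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=w.shape)
    return w - lr * noisy_sum / batch_size

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y, rng=rng)
print("learned weights:", w)
```

The clipping bound and noise multiplier together determine the privacy cost of each step; the filtering and adaptive-clipping techniques mentioned above aim to reduce the accuracy lost to this injected noise.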