Private Optimization
Private optimization aims to train machine learning models while guaranteeing the privacy of individual data points, typically via differential privacy mechanisms. Current research focuses on improving the efficiency and accuracy of differentially private optimizers, most often variants of stochastic gradient descent (SGD), by incorporating techniques such as Kalman filtering, low-pass filtering, and adaptive preconditioners that mitigate the accuracy loss caused by the noise injected for privacy. These advances are crucial for deploying machine learning in sensitive domains like healthcare and finance, where data privacy is paramount, and are driving progress in both the theoretical understanding and the practical application of differential privacy.
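To make the mechanics concrete, below is a minimal NumPy sketch of a DP-SGD-style update: each example's gradient is clipped to a fixed L2 norm, Gaussian noise calibrated to that clipping norm is added to the summed gradients (the Gaussian mechanism), and an exponential moving average then acts as a simple low-pass filter on the noisy gradient stream. The linear-regression loss, function names, and all hyperparameter values here are illustrative assumptions, not drawn from any particular paper.

```python
import numpy as np

def noisy_clipped_gradient(w, X, y, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Privatized minibatch gradient for linear regression (DP-SGD style).

    Clips each example's gradient to L2 norm <= clip_norm, sums the clipped
    gradients, adds Gaussian noise with scale noise_multiplier * clip_norm,
    and averages over the batch. Values here are illustrative assumptions.
    """
    rng = rng or np.random.default_rng()
    residuals = X @ w - y                         # shape (n,)
    per_example = residuals[:, None] * X          # per-example gradients, (n, d)
    norms = np.linalg.norm(per_example, axis=1, keepdims=True)
    clipped = per_example * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    return (clipped.sum(axis=0) + noise) / len(y)

# Toy run on synthetic data, with an exponential moving average serving as a
# low-pass filter over the noisy gradients, in the spirit of the filtering
# variants mentioned above.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=64)
w, g_filtered, beta, lr = np.zeros(5), np.zeros(5), 0.9, 0.1
for _ in range(200):
    g = noisy_clipped_gradient(w, X, y, rng=rng)
    g_filtered = beta * g_filtered + (1 - beta) * g  # smooths the injected noise
    w -= lr * g_filtered
print(np.round(w, 2))
```

The moving average trades a small amount of bias for a large reduction in the variance the optimizer sees, which is the core intuition behind the Kalman- and low-pass-filtering approaches: the privacy noise is independent across steps, so temporal smoothing attenuates it without weakening the privacy guarantee.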