Learning Privacy
Learning privacy focuses on developing methods to train and utilize machine learning models while safeguarding sensitive data used in their creation. Current research emphasizes improving the accuracy of privacy-preserving techniques like differentially private stochastic gradient descent (DPSGD), exploring the vulnerabilities of existing defenses through membership inference attacks, and investigating the privacy implications of model explainability. These efforts aim to strike a balance between model utility and data privacy, impacting the responsible development and deployment of AI systems across various sectors, particularly those handling sensitive financial or personal information.
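To make the DPSGD technique mentioned above concrete, here is a minimal NumPy sketch of a single DP-SGD update step, assuming a generic parameter vector and a batch of per-example gradients (the function name `dpsgd_step` and all hyperparameter values are illustrative, not from any particular library):

```python
import numpy as np

def dpsgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
               noise_multiplier=1.0, rng=None):
    """One illustrative DP-SGD update: clip each example's gradient to
    clip_norm, average, add calibrated Gaussian noise, then step."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise standard deviation follows the usual clip_norm * sigma / batch_size scaling
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

Clipping bounds each individual example's influence on the update, which is what lets the added Gaussian noise translate into a formal differential-privacy guarantee; the privacy accounting itself (tracking the cumulative epsilon over many steps) is handled separately in practice.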
Papers