Protecting Privacy
Protecting privacy in machine learning means developing techniques that prevent sensitive data from leaking during model training and deployment. Current research focuses on strengthening privacy in federated learning through secure aggregation protocols and local training methods, and on new approaches to data anonymization and model unlearning, often leveraging tools such as homomorphic encryption and generative adversarial networks. These advances are crucial for the responsible use of data, particularly in sensitive domains such as healthcare and autonomous systems, because they mitigate the risks posed by data breaches and model inversion attacks.
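To make the secure-aggregation idea concrete, the sketch below shows the core trick in its simplest form: each pair of clients agrees on a random mask that one adds and the other subtracts, so the server can recover the exact sum of model updates while each individual upload looks like noise. This is a minimal illustration, not any specific paper's protocol; the pairwise seeds, client count, and vector size are assumptions for the example, and a deployable protocol would derive seeds via key agreement (e.g., Diffie-Hellman) and handle client dropouts.

```python
# Minimal sketch of pairwise-masked secure aggregation for federated learning.
# Assumptions (not from the source): per-pair masks come from a seeded NumPy PRNG;
# a real protocol would establish these seeds with a key agreement such as
# Diffie-Hellman and add dropout recovery via secret sharing.
import numpy as np

NUM_CLIENTS = 4
DIM = 8  # length of each client's model-update vector (hypothetical)


def pairwise_mask(client_id: int, num_clients: int, dim: int) -> np.ndarray:
    """Sum of this client's pairwise masks; across all clients the masks cancel."""
    mask = np.zeros(dim)
    for other in range(num_clients):
        if other == client_id:
            continue
        # Both clients in a pair seed the same PRNG, so they draw identical noise.
        seed = hash((min(client_id, other), max(client_id, other))) % (2**32)
        noise = np.random.default_rng(seed).normal(size=dim)
        # The lower-indexed client adds the noise, the higher-indexed one subtracts it.
        mask += noise if client_id < other else -noise
    return mask


def masked_update(update: np.ndarray, client_id: int) -> np.ndarray:
    """What a client actually uploads: its update plus its self-cancelling mask."""
    return update + pairwise_mask(client_id, NUM_CLIENTS, DIM)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_updates = [rng.normal(size=DIM) for _ in range(NUM_CLIENTS)]

    # The server only ever sees masked vectors; individually they reveal nothing useful...
    received = [masked_update(u, i) for i, u in enumerate(true_updates)]

    # ...but the pairwise masks cancel in the sum, so the aggregate update is exact.
    aggregate = np.sum(received, axis=0)
    assert np.allclose(aggregate, np.sum(true_updates, axis=0))
    print("Aggregated update:", aggregate)
```

The same separation of "what each party sends" from "what the server can reconstruct" is what homomorphic-encryption-based aggregation provides cryptographically rather than through cancelling masks.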