Leakage Attack
Leakage attacks exploit vulnerabilities in machine learning systems to extract sensitive information about training data from model parameters, gradients, or outputs, undermining privacy protections. Current research focuses on both developing and mitigating these attacks across several domains: federated learning (via methods such as gradient inversion and linear layer leakage), language models (analyzing PII leakage and the effectiveness of unlearning), and code assistants (assessing and reducing exposure of proprietary code). Understanding and addressing leakage attacks is crucial for the responsible development and deployment of machine learning, particularly in sensitive applications such as healthcare and finance, where data privacy is paramount.
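To make the gradient-inversion idea mentioned above concrete, the sketch below shows the core gradient-matching loop used by such attacks: the attacker optimizes dummy inputs and labels so that the gradients they induce match the gradients a federated-learning client shared. This is a minimal sketch, assuming PyTorch; the toy model, dimensions, and optimizer settings are illustrative choices, not taken from any specific paper in the list below.

```python
# Minimal gradient-inversion sketch (in the spirit of gradient-matching attacks
# such as "Deep Leakage from Gradients"); all sizes and hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy model standing in for a federated-learning client's model.
model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 4))
criterion = nn.CrossEntropyLoss()

# "Victim" batch the client trains on; the attacker never observes it directly.
x_true = torch.randn(1, 32)
y_true = torch.tensor([2])

# Gradients the client shares with the server -- this is the attacker's view.
loss = criterion(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

# Attacker's dummy data, optimized so its gradients match the observed ones.
x_dummy = torch.randn(1, 32, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)  # soft labels, optimized jointly
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

for step in range(50):
    def closure():
        optimizer.zero_grad()
        dummy_loss = torch.sum(
            -F.softmax(y_dummy, dim=-1) * F.log_softmax(model(x_dummy), dim=-1)
        )
        dummy_grads = torch.autograd.grad(
            dummy_loss, model.parameters(), create_graph=True
        )
        # Gradient-matching objective: squared distance between gradient sets.
        grad_diff = sum(((dg - tg) ** 2).sum()
                        for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    optimizer.step(closure)

print("reconstruction error:", F.mse_loss(x_dummy.detach(), x_true).item())
```

The gradient-matching objective is the common core of this family of attacks; defenses studied in the literature, such as gradient clipping, noise addition, or secure aggregation, aim to make this matching problem ill-posed or uninformative.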
Papers
November 5, 2024
September 19, 2024
September 12, 2024
September 7, 2024
July 27, 2024
June 19, 2024
April 13, 2024
March 26, 2024
March 15, 2024
January 29, 2024
December 18, 2023
August 30, 2023
March 27, 2023
February 1, 2023
December 5, 2022
October 4, 2022
September 21, 2022
May 27, 2022
May 13, 2022