Leakage Attack
Leakage attacks exploit vulnerabilities in machine learning systems to extract sensitive information from training data or model parameters, undermining privacy protections. Current research focuses on developing and mitigating these attacks across several domains: federated learning (e.g., gradient inversion and linear-layer leakage), language models (analyzing PII leakage and the effectiveness of unlearning), and code assistants (assessing and reducing code exposure). Understanding and addressing leakage attacks is crucial for the responsible development and deployment of machine learning, particularly in sensitive applications such as healthcare and finance, where data privacy is paramount.
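To make the gradient-inversion idea mentioned above concrete, the following is a minimal sketch in the spirit of "Deep Leakage from Gradients" (Zhu et al., 2019): an attacker who observes the gradients a client shares in federated learning optimizes dummy inputs and labels so that their gradients match the observed ones, thereby reconstructing the private training example. The toy model, input shapes, and hyperparameters below are illustrative assumptions, not a reference implementation of any particular paper.

```python
# Illustrative gradient-inversion sketch (toy model and shapes are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy victim model and one private training example.
model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 4))
x_true = torch.randn(1, 32)
y_true = torch.tensor([2])

# Gradients the server would observe from this client's update.
loss = F.cross_entropy(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

# Attacker initializes dummy data and a soft dummy label, then optimizes them
# so the resulting gradients match the observed ones.
x_dummy = torch.randn(1, 32, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

for step in range(100):
    def closure():
        optimizer.zero_grad()
        dummy_loss = F.cross_entropy(model(x_dummy), torch.softmax(y_dummy, dim=-1))
        dummy_grads = torch.autograd.grad(
            dummy_loss, model.parameters(), create_graph=True)
        # Gradient-matching objective: squared distance between gradient sets.
        grad_diff = sum(((dg - tg) ** 2).sum()
                        for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    optimizer.step(closure)

print("reconstruction error:", (x_dummy - x_true).norm().item())
```

With small models and batch size one, this style of gradient matching can recover inputs nearly exactly; defenses studied in the literature (gradient clipping, noise addition, secure aggregation) aim to break exactly this matching step.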