Inversion Attack
Inversion attacks exploit a trained model's outputs or gradients to reconstruct sensitive training data, undermining privacy in applications ranging from facial recognition to federated learning. Current research focuses on developing and evaluating these attacks against various model architectures, including diffusion models, large language models, and neuromorphic networks, and on exploring defenses such as sparse coding, gradient mixing, and feature obfuscation. Understanding and mitigating inversion attacks is crucial for the responsible deployment of machine learning systems and for protecting user privacy across these applications.
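As a minimal illustration of the core idea, the sketch below runs a gradient-ascent model inversion against a toy white-box classifier: the attacker optimizes an input to maximize the model's confidence for a target class, recovering a class-representative input. All names here (`W`, `b`, `confidence`, `invert`) are hypothetical stand-ins for a trained model, not part of any specific paper's method.

```python
import numpy as np

# Hypothetical white-box model: a linear softmax classifier whose
# weights stand in for parameters fit on private training data.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))   # 3 classes, 5 input features
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def confidence(x, c):
    """Model's confidence that input x belongs to class c."""
    return softmax(W @ x + b)[c]

def invert(c, steps=500, lr=0.5):
    """Gradient-ascent model inversion: find an input that maximizes
    the model's log-confidence for class c, approximating a
    class-representative (potentially private) training example."""
    x = np.zeros(5)
    for _ in range(steps):
        p = softmax(W @ x + b)
        onehot = np.eye(3)[c]
        # Gradient of log p_c w.r.t. x for a softmax-linear model.
        grad = W.T @ (onehot - p)
        x += lr * grad
    return x

x_rec = invert(0)
print(confidence(x_rec, 0))
```

Real attacks add regularizers (e.g. image priors) so the recovered input stays on the data manifold; defenses like gradient mixing and feature obfuscation aim to make exactly this optimization uninformative.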