Inversion Attack

Inversion attacks exploit machine learning models to reconstruct sensitive training data, undermining privacy in applications ranging from facial recognition to federated learning. Current research focuses on developing and evaluating these attacks against a range of model architectures, including diffusion models, large language models, and neuromorphic networks, and on exploring defenses such as sparse coding, gradient mixing, and feature obfuscation. Understanding and mitigating inversion attacks is crucial for the responsible deployment of machine learning systems and the protection of user privacy.
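
To make the basic idea concrete, the sketch below shows a classic gradient-based model inversion attack against an image classifier: starting from a blank input, gradient ascent pushes the input toward high confidence for a chosen class, recovering a class-representative reconstruction of the training data. This is a minimal illustration only, assuming a PyTorch classifier; the model `clf`, the MNIST-like input shape, and the hyperparameters are hypothetical and not taken from any specific paper above.

```python
import torch
import torch.nn.functional as F

def invert_class(model, target_class, input_shape=(1, 1, 28, 28),
                 steps=500, lr=0.1, tv_weight=1e-4):
    """Gradient-based model inversion: optimize an input so the trained
    classifier assigns high confidence to `target_class`, yielding a
    class-representative reconstruction."""
    model.eval()
    x = torch.zeros(input_shape, requires_grad=True)  # start from a blank image
    optimizer = torch.optim.Adam([x], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Maximize the target-class log-probability (minimize its negative).
        loss = -F.log_softmax(logits, dim=1)[0, target_class]
        # Total-variation prior keeps the reconstruction smooth and image-like.
        tv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
             (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
        (loss + tv_weight * tv).backward()
        optimizer.step()
        x.data.clamp_(0.0, 1.0)  # keep pixel values in a valid range

    return x.detach()

# Usage (hypothetical): reconstruct a representative image of class 3
# from a trained MNIST classifier `clf`.
# reconstruction = invert_class(clf, target_class=3)
```

Defenses such as gradient mixing or feature obfuscation aim to degrade exactly the signal this optimization relies on, at some cost in model utility.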

Papers