Model Inversion Attack
Model inversion attacks exploit a trained model's parameters, outputs, or gradients to reconstruct sensitive training data, posing a significant privacy risk. Current research focuses on developing and benchmarking increasingly sophisticated attacks, often using generative adversarial networks (GANs) and diffusion models as priors for the reconstruction, while also exploring defense mechanisms such as data augmentation, differential privacy, and architectural modifications (e.g., sparse coding). This line of work is central to the responsible development and deployment of machine learning systems, particularly in privacy-sensitive applications.
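The core idea behind the simplest (white-box, gradient-based) attacks can be illustrated with a short sketch. The following is a minimal example, assuming a trained PyTorch image classifier `model` over inputs in [0, 1]; the function name, input shape, and hyperparameters are illustrative and not taken from any of the papers listed below.

```python
import torch
import torch.nn.functional as F

def invert_class(model, target_class, input_shape=(1, 1, 28, 28),
                 steps=500, lr=0.1):
    """Gradient-ascent model inversion sketch: synthesize an input that the
    model assigns high confidence for `target_class`. Assumes a white-box
    PyTorch classifier over images with pixel values in [0, 1]."""
    model.eval()
    # Start from a blank image and optimize its pixels directly.
    x = torch.zeros(input_shape, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Minimize the negative log-probability of the target class,
        # i.e. maximize the model's confidence in that class.
        loss = -F.log_softmax(logits, dim=1)[0, target_class]
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # keep pixels in a valid image range
    return x.detach()
```

Calling `invert_class(model, 3)` on an MNIST-scale classifier would return a 28x28 tensor approximating what the model "considers" a prototypical digit 3. GAN- and diffusion-based attacks follow the same principle but optimize in a generator's latent space instead of raw pixel space, which yields far more realistic reconstructions.

On the defense side, one common baseline is output perturbation: adding calibrated noise to the confidence vector before releasing it. Below is a generic illustration using Laplace noise; it is not the specific one-parameter mechanism of the "One Parameter Defense" paper listed below, and the noise scale `1/epsilon` assumes a per-query sensitivity of 1.

```python
import numpy as np

def noisy_confidences(probs, epsilon=1.0, rng=None):
    """Perturb a probability vector with Laplace noise of scale 1/epsilon,
    then clip to non-negative values and renormalize. Smaller epsilon means
    stronger privacy but lower utility for legitimate users."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = probs + rng.laplace(scale=1.0 / epsilon, size=probs.shape)
    noisy = np.clip(noisy, 0.0, None)
    return noisy / noisy.sum()
```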
Papers
One Parameter Defense -- Defending against Data Inference Attacks via Differential Privacy
Dayong Ye, Sheng Shen, Tianqing Zhu, Bo Liu, Wanlei Zhou
Model Inversion Attack against Transfer Learning: Inverting a Model without Accessing It
Dayong Ye, Huiqiang Chen, Shuai Zhou, Tianqing Zhu, Wanlei Zhou, Shouling Ji
Label-only Model Inversion Attack: The Attack that Requires the Least Information
Dayong Ye, Tianqing Zhu, Shuai Zhou, Bo Liu, Wanlei Zhou