Membership Inference Attack
Membership inference attacks (MIAs) aim to determine whether a specific data point was used to train a machine learning model, posing a significant privacy risk. They typically exploit the tendency of models to behave differently on training data than on unseen data, for example by assigning members lower loss or higher confidence. Current research evaluates MIA effectiveness across model architectures, including large language models (LLMs), diffusion models, and vision transformers, and explores how training methods and data characteristics affect attack success. The reliability of MIAs themselves is also under scrutiny: several studies find that their capabilities are overestimated, particularly in realistic settings. Understanding both the vulnerabilities MIAs expose and their limitations is crucial for developing effective privacy-preserving techniques and for responsibly deploying machine learning models.
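To make the attack idea concrete, below is a minimal sketch of one of the simplest MIAs, a loss-threshold attack in the spirit of Yeom et al. (2018): guess "member" for any point whose loss under the target model falls below a cutoff. The setup (a scikit-learn logistic regression as the target, a synthetic dataset, and calibrating the threshold directly from member losses) is an illustrative assumption and not drawn from any of the papers listed here; a real attacker would have to estimate the threshold, e.g. via shadow models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical target model: half the data is used for training
# ("members"), the other half is held out ("non-members").
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)
target_model = LogisticRegression(max_iter=1000).fit(X_in, y_in)

def example_losses(model, X, y):
    """Per-example cross-entropy loss of the target model."""
    probs = model.predict_proba(X)
    p_true = probs[np.arange(len(y)), y]          # probability of the true label
    return -np.log(np.clip(p_true, 1e-12, None))  # clip to avoid log(0)

member_losses = example_losses(target_model, X_in, y_in)
nonmember_losses = example_losses(target_model, X_out, y_out)

# Illustrative threshold: the mean loss on members. Here we compute it
# from ground truth for simplicity; an attacker would approximate it
# with shadow models trained on similar data.
threshold = member_losses.mean()

losses = np.concatenate([member_losses, nonmember_losses])
truth = np.concatenate([np.ones(len(member_losses)), np.zeros(len(nonmember_losses))])
guess_member = losses < threshold
accuracy = (guess_member == truth.astype(bool)).mean()
print(f"attack accuracy: {accuracy:.3f}")  # ~0.5 means the model leaks little
```

A well-regularized model like this one barely overfits, so the attack accuracy stays near chance; the papers below study far stronger variants, such as shadow-model and per-example calibrated attacks, and settings (synthetic data, diffusion models, watermarked sets) where simple thresholds do not directly apply.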
Papers
Synthetic is all you need: removing the auxiliary data assumption for membership inference attacks against synthetic data
Florent Guépin, Matthieu Meeus, Ana-Maria Cretu, Yves-Alexandre de Montjoye
Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction
Zitao Chen, Karthik Pattabiraman
Towards More Realistic Membership Inference Attacks on Large Diffusion Models
Jan Dubiński, Antoni Kowalczuk, Stanisław Pawlak, Przemysław Rokita, Tomasz Trzciński, Paweł Morawiecki
Set-Membership Inference Attacks using Data Watermarking
Mike Laszkiewicz, Denis Lukovnikov, Johannes Lederer, Asja Fischer