Membership Inference Attack
Membership inference attacks (MIAs) aim to determine whether a specific data point was used to train a machine learning model, posing a significant privacy risk. Current research evaluates MIA effectiveness across model architectures, including large language models (LLMs), diffusion models, and vision transformers, and explores how training methods and data characteristics affect attack success. The reliability of MIAs themselves is also under scrutiny, with several studies showing that their capabilities are often overestimated, particularly in realistic settings. Understanding both the vulnerabilities that MIAs expose and the limitations of the attacks themselves is crucial for developing effective privacy-preserving techniques and for responsibly deploying machine learning models.
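To make the attack setting concrete, the sketch below illustrates a simple loss-threshold MIA in the spirit of classic baselines: a target model is trained on a "member" split, and the attacker flags examples with unusually low loss as likely training members. The synthetic data, logistic-regression target model, and median-based threshold are illustrative assumptions only, not the method of any paper listed below.

```python
# Minimal loss-threshold membership inference sketch (illustrative assumptions,
# not the method of any specific paper in this list).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic data: first half is used for training ("members"), second half is held out.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, y_member = X[:1000], y[:1000]
X_nonmember, y_nonmember = X[1000:], y[1000:]

# Target model trained only on the member split.
target = LogisticRegression(max_iter=1000).fit(X_member, y_member)

def per_example_loss(model, X, y):
    """Cross-entropy loss of the target model on each example."""
    probs = model.predict_proba(X)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12)

loss_member = per_example_loss(target, X_member, y_member)
loss_nonmember = per_example_loss(target, X_nonmember, y_nonmember)

# Attack score: lower loss -> more likely the example was a training member.
scores = np.concatenate([-loss_member, -loss_nonmember])
labels = np.concatenate([np.ones(len(loss_member)), np.zeros(len(loss_nonmember))])
print("Attack AUC:", roc_auc_score(labels, scores))

# A concrete decision rule: flag examples whose loss falls below a threshold.
# Here the median non-member loss serves as an illustrative calibration choice.
threshold = np.median(loss_nonmember)
print("Fraction of true members flagged:", (loss_member < threshold).mean())
```

An attack AUC near 0.5 indicates the model leaks little membership signal, while values well above 0.5 suggest overfitting that a real adversary could exploit; the overestimation concerns raised in the literature typically arise when member and non-member data differ in distribution rather than only in membership.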
Papers
MGMD-GAN: Generalization Improvement of Generative Adversarial Networks with Multiple Generator Multiple Discriminator Framework Against Membership Inference Attacks
Nirob Arefin
Provable Privacy Attacks on Trained Shallow Neural Networks
Guy Smorodinsky, Gal Vardi, Itay Safran
Detecting Training Data of Large Language Models via Expectation Maximization
Gyuwan Kim, Yang Li, Evangelia Spiliopoulou, Jie Ma, Miguel Ballesteros, William Yang Wang
Unleashing Worms and Extracting Data: Escalating the Outcome of Attacks against RAG-based Inference in Scale and Severity Using Jailbreaking
Stav Cohen, Ron Bitton, Ben Nassi
Generated Data with Fake Privacy: Hidden Dangers of Fine-tuning Large Language Models on Generated Data
Atilla Akkus, Mingjie Li, Junjie Chu, Michael Backes, Yang Zhang, Sinem Sav