Membership Inference Attack
Membership inference attacks (MIAs) aim to determine whether a specific data point was used to train a machine learning model, posing a significant privacy risk. Current research evaluates MIA effectiveness across model architectures, including large language models (LLMs), diffusion models, and vision transformers, and examines how training methods and data characteristics affect attack success. The reliability of MIAs themselves is also under scrutiny, with several studies highlighting their limitations and overestimated capabilities, particularly in realistic settings. Understanding both the privacy vulnerabilities that MIAs expose and the limitations of the attacks themselves is crucial for developing effective privacy-preserving techniques and for responsibly deploying machine learning models.
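To make the core idea concrete, below is a minimal sketch of a standard loss-thresholding baseline attack: examples with unusually low loss under the target model are guessed to be training members. The synthetic data, model choice, and threshold rule are illustrative assumptions, not the method of any paper listed here.

```python
# Minimal loss-threshold membership inference sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Build a toy target model whose training set we will attack.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(
    X, y, test_size=0.5, random_state=0
)
target_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def per_example_loss(model, X, y):
    """Cross-entropy loss of each example under the target model."""
    probs = model.predict_proba(X)
    return -np.log(np.clip(probs[np.arange(len(y)), y], 1e-12, None))

loss_members = per_example_loss(target_model, X_train, y_train)
loss_non_members = per_example_loss(target_model, X_out, y_out)

# Attack: predict "member" when the loss falls below a threshold,
# here crudely calibrated to the mean loss over both populations.
threshold = np.mean(np.concatenate([loss_members, loss_non_members]))
tpr = np.mean(loss_members < threshold)      # members correctly flagged
fpr = np.mean(loss_non_members < threshold)  # non-members wrongly flagged
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")
```

The gap between the true-positive and false-positive rates here reflects how much the toy model overfits; the papers below study when such gaps exist (or fail to appear) for large models and how reliably they can be measured.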
Papers
Analysis of Privacy Leakage in Federated Large Language Models
Minh N. Vu, Truc Nguyen, Tre' R. Jeter, My T. Thai
Defending Against Data Reconstruction Attacks in Federated Learning: An Information Theory Approach
Qi Tan, Qi Li, Yi Zhao, Zhuotao Liu, Xiaobing Guo, Ke Xu
Inexact Unlearning Needs More Careful Evaluations to Avoid a False Sense of Privacy
Jamie Hayes, Ilia Shumailov, Eleni Triantafillou, Amr Khalifa, Nicolas Papernot
PANORAMIA: Privacy Auditing of Machine Learning Models without Retraining
Mishaal Kazmi, Hadrien Lautraite, Alireza Akbari, Qiaoyue Tang, Mauricio Soroco, Tao Wang, Sébastien Gambs, Mathias Lécuyer
Do Membership Inference Attacks Work on Large Language Models?
Michael Duan, Anshuman Suri, Niloofar Mireshghallah, Sewon Min, Weijia Shi, Luke Zettlemoyer, Yulia Tsvetkov, Yejin Choi, David Evans, Hannaneh Hajishirzi