Inference Attack
Inference attacks exploit machine learning model outputs to infer sensitive information about the training data, posing a significant privacy risk in settings such as federated learning and AI-as-a-service. Current research focuses on developing novel attack techniques targeting different model architectures (e.g., graph neural networks, large language models) and data modalities, as well as on designing robust defenses such as differential privacy and adversarial training. Understanding and mitigating these attacks is crucial for the responsible deployment of machine learning systems and for protecting user privacy in collaborative and cloud-based settings.
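To make the threat concrete, below is a minimal sketch of a confidence-thresholding membership inference attack, a standard baseline in this literature: an overfit model tends to assign higher confidence to its training points ("members") than to unseen points, so an attacker who can query prediction probabilities can guess membership. The dataset, model choice, label-noise level, and the 0.7 threshold are all illustrative assumptions, not taken from any of the papers listed here.

```python
# Baseline membership inference via confidence thresholding (illustrative sketch).
# Assumption: the attacker can query the model's predicted class probabilities.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic task with label noise (flip_y) so the model can overfit its training set.
X, y = make_classification(n_samples=400, n_features=20, flip_y=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Fully grown trees memorize the (noisy) training labels, widening the
# confidence gap between members and non-members.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def top_confidence(model, X):
    """Highest predicted class probability for each example."""
    return model.predict_proba(X).max(axis=1)

conf_members = top_confidence(model, X_train)     # training points (members)
conf_nonmembers = top_confidence(model, X_test)   # held-out points (non-members)

def guess_membership(conf, threshold=0.7):
    """Flag an example as a training member if confidence exceeds the threshold.
    The threshold is an illustrative choice; real attacks calibrate it on
    shadow models or auxiliary data."""
    return conf > threshold

# Balanced attack accuracy: members correctly flagged plus non-members
# correctly rejected, averaged.
acc = 0.5 * (guess_membership(conf_members).mean()
             + (1 - guess_membership(conf_nonmembers).mean()))
print(f"mean member confidence:     {conf_members.mean():.3f}")
print(f"mean non-member confidence: {conf_nonmembers.mean():.3f}")
print(f"balanced attack accuracy:   {acc:.3f}")
```

A balanced accuracy noticeably above 0.5 indicates training-set leakage through the model's confidences; defenses such as differential privacy aim to push this gap, and hence the attack's accuracy, back toward chance.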
Papers
Truthful Text Sanitization Guided by Inference Attacks
Ildikó Pilán, Benet Manzanares-Salor, David Sánchez, Pierre Lison
Scrutinizing the Vulnerability of Decentralized Learning to Membership Inference Attacks
Ousmane Touat, Jezekael Brunon, Yacine Belal, Julien Nicolas, Mohamed Maouche, César Sabater, Sonia Ben Mokhtar