Label Inference Attack

Label inference attacks exploit vulnerabilities in privacy-preserving machine learning frameworks, such as federated learning, to deduce private training labels from information exchanged during training, such as gradients or intermediate representations, even when the labels themselves are never shared. Current research focuses on developing and analyzing these attacks across a range of model architectures and protection mechanisms, including tree-based models, graph neural networks, and systems employing homomorphic encryption, with particular emphasis on vertical federated learning and split learning scenarios. Understanding and mitigating these attacks is crucial for the responsible deployment of privacy-preserving machine learning in sensitive applications such as healthcare and finance, where data privacy is paramount.
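
As a concrete illustration of why split learning can leak labels, the sketch below simulates a simple gradient-sign attack on two-party split learning with binary classification. The setup, party roles, and variable names are illustrative assumptions rather than any particular framework's API. With a sigmoid output and binary cross-entropy loss, the gradient returned to the non-label party is proportional to (p − y), so its direction alone separates the two label classes.

```python
import numpy as np

# Hypothetical two-party split learning setup (a minimal sketch, not a real framework):
# the passive party holds features and sends cut-layer embeddings to the active party,
# which holds the private labels and a final logit layer. With sigmoid + binary
# cross-entropy, the gradient w.r.t. the logit is (p - y), so the gradient returned
# to the passive party, (p - y) * w, flips direction with the label and leaks it.

rng = np.random.default_rng(0)

n_samples, emb_dim = 1000, 8
labels = rng.integers(0, 2, size=n_samples)          # active party's private labels
embeddings = rng.normal(size=(n_samples, emb_dim))   # passive party's cut-layer outputs

# Active party's head: a single linear logit layer (unknown to the attacker).
w = rng.normal(size=emb_dim)
logits = embeddings @ w + 0.1
probs = 1.0 / (1.0 + np.exp(-logits))

# Gradients sent back to the passive party: dL/d(embedding) = (p - y) * w.
grads = (probs - labels)[:, None] * w[None, :]

# Attack: every returned gradient is parallel to w, and its direction flips with
# the label. Projecting onto any single received gradient therefore splits the
# batch into the two classes, up to a global sign (resolved in practice with one
# known label or knowledge of the class prior).
scores = grads @ grads[0]
guess = (scores < 0).astype(int)
accuracy = max((guess == labels).mean(), (guess != labels).mean())
print(f"label inference accuracy: {accuracy:.3f}")
```

NumPy is used here so the leakage mechanism is visible without any deep-learning dependencies; the same reasoning carries over to multi-dimensional cut layers, where gradient direction and norm still correlate with the label.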

Papers