Label Inference Attack
Label inference attacks exploit vulnerabilities in privacy-preserving machine learning frameworks, such as federated learning, to deduce sensitive training labels from information that is shared during training, such as gradients and intermediate representations, even when the raw labels themselves are never exchanged. Current research focuses on developing and analyzing these attacks across various model architectures, including tree-based models, graph neural networks, and models protected by homomorphic encryption, with particular emphasis on vertical federated learning and split learning scenarios. Understanding and mitigating these attacks is crucial for the responsible deployment of privacy-preserving machine learning in sensitive applications like healthcare and finance, where label privacy is paramount.
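A well-known instance of such leakage arises in two-party split learning with a cross-entropy loss: the gradient of the loss with respect to the logits is softmax(z) minus the one-hot label, so the single negative entry of that gradient reveals the private label exactly. The sketch below (a minimal illustration, not any specific paper's attack implementation) shows how an adversary observing these gradients recovers the labels:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))      # stand-in for the model's logit outputs
labels = np.array([2, 0, 1, 2])       # held privately by the label party

# Gradient of cross-entropy w.r.t. the logits: softmax(z) - one_hot(y).
# In split learning, this gradient is sent back to the non-label party.
one_hot = np.eye(3)[labels]
grad = softmax(logits) - one_hot

# Attack: the true-label component is the only negative entry (p_y - 1 < 0,
# all other entries p_j > 0), so argmin over each row recovers the label.
inferred = grad.argmin(axis=1)
print(inferred)  # matches the private labels exactly
```

This is why practical defenses for split learning perturb or clip the returned gradients rather than sending them in the clear.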