Property Inference Attack
Property inference attacks aim to extract sensitive statistical properties of a machine learning model's training data without direct access to that data. Current research focuses on developing and evaluating these attacks against various model architectures, including federated learning systems, diffusion models, and generative adversarial networks (GANs), often leveraging techniques such as shadow models and gradient analysis. This area is significant because it exposes data-privacy vulnerabilities that arise when machine learning models are shared or deployed, affecting both the security of sensitive datasets and the fairness of model outputs. Developing effective defenses against these attacks remains a crucial area of ongoing research.
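To make the shadow-model technique mentioned above concrete, the following is a minimal, self-contained sketch. All names, the synthetic data, and the "property" (whether one feature's mean is shifted) are illustrative assumptions, not from any specific paper: an attacker trains many shadow models on datasets that do or do not have the property, then trains a meta-classifier (here, a nearest-centroid rule over the shadow models' weights) to predict the property of a victim model's training data from its parameters alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(prop, n=200):
    # Hypothetical sensitive property: prop=1 shifts the mean of the
    # first feature up (+1) versus down (-1). This is an illustrative
    # stand-in for properties like demographic composition.
    mean = 1.0 if prop else -1.0
    X = rng.normal(loc=[mean, 0.0], scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    return X, y

def train_shadow(X, y):
    # "Shadow model": a least-squares linear model; its fitted weights
    # (including the intercept) are the features the meta-classifier sees.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

# Build the meta-training set: shadow-model weights labeled by the
# property of the data each shadow model was trained on.
feats, labels = [], []
for prop in (0, 1):
    for _ in range(20):
        X, y = make_dataset(prop)
        feats.append(train_shadow(X, y))
        labels.append(prop)
feats, labels = np.array(feats), np.array(labels)

# Meta-classifier: nearest class centroid in weight space.
centroids = {p: feats[labels == p].mean(axis=0) for p in (0, 1)}

def infer_property(model_weights):
    return min(centroids,
               key=lambda p: np.linalg.norm(model_weights - centroids[p]))

# Attack a fresh "victim" model whose training data has the property;
# the attacker only ever sees the victim's weights.
Xv, yv = make_dataset(1)
print(infer_property(train_shadow(Xv, yv)))
```

The same structure carries over to realistic settings: the shadow models become neural networks, the weight vector becomes flattened parameters or gradient statistics (as in the federated-learning variants noted above), and the meta-classifier becomes a learned model rather than a centroid rule.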