Likelihood Ratio Attack
Likelihood ratio attacks are a class of membership inference attacks that determine whether a specific data point was used to train a machine learning model, with most work targeting deep neural networks and diffusion models. The attack frames membership inference as a hypothesis test: it compares the likelihood of the model's behavior on a candidate point under the hypothesis that the point was in the training set against the hypothesis that it was not, typically estimating the two distributions with shadow models. Current research explores variants of the attack, including ones that leverage knowledge distillation and adversarial perturbations to improve attack power, particularly the true-positive rate at low false-positive rates. These attacks expose vulnerabilities in training data privacy and are central to evaluating and hardening machine learning systems across applications such as biometric protection and sensitive data analysis.
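To make the hypothesis-test framing concrete, the sketch below follows the Gaussian approximation used in the canonical LiRA formulation (Carlini et al., 2022): fit one Gaussian to shadow-model confidences observed when the candidate point was in training and one when it was not, then score the target model's observed confidence by the ratio of the two likelihoods. This is a minimal illustration, not the method of any specific paper listed below; the shadow-model scores are synthetic placeholders, and names such as `logit_confidence` and `likelihood_ratio` are illustrative.

```python
import numpy as np
from scipy.stats import norm


def logit_confidence(p_true):
    """Logit-scale the model's probability on the true label so the
    resulting statistic is approximately Gaussian (as in LiRA)."""
    p = np.clip(p_true, 1e-8, 1 - 1e-8)
    return np.log(p / (1 - p))


def likelihood_ratio(observed, in_scores, out_scores):
    """Ratio of the likelihood that `observed` came from the IN (member)
    distribution versus the OUT (non-member) distribution, each
    approximated by a Gaussian fit to shadow-model scores."""
    p_in = norm.pdf(observed, np.mean(in_scores), np.std(in_scores) + 1e-8)
    p_out = norm.pdf(observed, np.mean(out_scores), np.std(out_scores) + 1e-8)
    return p_in / (p_out + 1e-30)


# Hypothetical shadow-model statistics for one candidate point:
# confidences from shadow models trained WITH the point (in)
# and WITHOUT it (out). Real attacks obtain these by retraining.
rng = np.random.default_rng(0)
in_scores = logit_confidence(rng.beta(8.0, 2.0, size=128))
out_scores = logit_confidence(rng.beta(2.0, 2.0, size=128))

# The target model is very confident on this point; a ratio well
# above 1 is evidence of membership.
observed = logit_confidence(0.97)
print(f"likelihood ratio: {likelihood_ratio(observed, in_scores, out_scores):.2f}")
```

Thresholding this ratio yields the attack's decision rule, and sweeping the threshold traces out the ROC curve; reporting true-positive rate at a fixed low false-positive rate, rather than average accuracy, is what distinguishes the evaluation regime mentioned above.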
Papers
November 2, 2024
September 19, 2024
May 13, 2024
February 16, 2024
October 10, 2023
July 11, 2023
January 24, 2023
October 24, 2022