Inference Attack
Inference attacks exploit machine learning model outputs to infer sensitive information about the training data, such as whether a specific record was used for training (membership inference) or private attributes of individuals (attribute inference), posing a significant privacy risk in applications like federated learning and AI-as-a-service. Current research focuses on developing new attack techniques targeting different model architectures (e.g., graph neural networks, large language models) and data modalities, as well as designing robust defenses such as differential privacy and adversarial training. Understanding and mitigating these attacks is crucial for the responsible deployment of machine learning systems and for protecting user privacy in collaborative and cloud-based settings.
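To make the threat model concrete, below is a minimal sketch of one of the simplest inference attacks, confidence-thresholding membership inference, which guesses that a point was in the training set when the model is unusually confident on it. The target model, dataset, and threshold value are all illustrative assumptions (a scikit-learn classifier on synthetic data), not the method of any particular paper.

```python
# Minimal confidence-thresholding membership inference sketch.
# Assumption: a generic scikit-learn classifier stands in for the target model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Train the target model on "member" data; "non-member" data is held out.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0)

target_model = RandomForestClassifier(n_estimators=100, random_state=0)
target_model.fit(X_member, y_member)

def membership_attack(model, X, y, threshold=0.9):
    """Guess 'member' when the model's confidence in the true label exceeds
    the threshold; overfit models tend to be more confident on training
    points than on unseen ones."""
    proba = model.predict_proba(X)
    conf_true_label = proba[np.arange(len(y)), y]
    return conf_true_label >= threshold

# Evaluate how well the attack separates members from non-members.
member_guesses = membership_attack(target_model, X_member, y_member)
nonmember_guesses = membership_attack(target_model, X_nonmember, y_nonmember)
tpr = member_guesses.mean()     # fraction of members correctly flagged
fpr = nonmember_guesses.mean()  # fraction of non-members wrongly flagged
print(f"attack TPR: {tpr:.2f}, FPR: {fpr:.2f}")
```

The gap between TPR and FPR reflects how much the model's behavior leaks about its training set; defenses such as differential privacy aim to shrink that gap.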