Inference Attack
Inference attacks exploit machine learning model outputs to infer sensitive information about the training data (for example, whether a particular record was part of the training set, as in membership inference), posing a significant privacy risk in applications such as federated learning and AI-as-a-service. Current research focuses on developing novel attack techniques targeting different model architectures (e.g., graph neural networks, large language models) and data modalities, as well as designing robust defenses such as differential privacy and adversarial training. Understanding and mitigating these attacks is crucial for the responsible deployment of machine learning systems and for protecting user privacy in collaborative and cloud-based settings.
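To make the attack side concrete, below is a minimal sketch of a confidence-threshold membership inference attack: the adversary queries the target model and guesses "member" whenever the model is unusually confident on the true label. The synthetic dataset, the logistic-regression target model, and the median-based threshold are illustrative assumptions, not the method of any specific paper.

```python
# Minimal confidence-threshold membership inference attack (illustrative sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data: "members" were used to train the target model,
# "non-members" are held-out records from the same distribution.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, y_member = X[:1000], y[:1000]
X_nonmember, y_nonmember = X[1000:], y[1000:]

# The victim model is trained only on the member records.
target = LogisticRegression(max_iter=1000).fit(X_member, y_member)

def confidence_on_true_label(model, X, y):
    # Probability the model assigns to the correct class for each record.
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

conf_member = confidence_on_true_label(target, X_member, y_member)
conf_nonmember = confidence_on_true_label(target, X_nonmember, y_nonmember)

# Attack rule: guess "member" when confidence exceeds a threshold.
# Here the threshold is simply the median of pooled confidences (an assumption;
# real attacks calibrate it on shadow models or attacker-held data).
threshold = np.median(np.concatenate([conf_member, conf_nonmember]))
guess_member = conf_member > threshold
guess_nonmember = conf_nonmember > threshold

# Balanced attack accuracy: mean of true-positive and true-negative rates.
attack_accuracy = 0.5 * (guess_member.mean() + (1 - guess_nonmember.mean()))
print(f"Membership inference attack accuracy: {attack_accuracy:.3f}")
```

The gap between member and non-member confidences (and hence attack accuracy) grows with overfitting, which is why generalization-improving and privacy-preserving training both reduce this attack's effectiveness.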
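On the defense side, a common mitigation mentioned above is differential privacy in training. The sketch below shows a DP-SGD-style update step, with per-example gradient clipping followed by calibrated Gaussian noise, written in plain NumPy for a logistic-regression objective. The clip norm, noise multiplier, and learning rate are illustrative assumptions rather than tuned values.

```python
# DP-SGD-style training step (illustrative sketch): clip each example's
# gradient, average, then add Gaussian noise scaled to the clip norm.
import numpy as np

rng = np.random.default_rng(0)
n, d = 512, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

w = np.zeros(d)
clip_norm, noise_multiplier, lr, batch_size = 1.0, 1.1, 0.1, 64

for step in range(200):
    idx = rng.choice(n, size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]

    # Per-example logistic-loss gradients, shape (batch, d).
    p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
    per_example_grads = (p - yb)[:, None] * Xb

    # Clip each example's gradient norm to bound its influence on the update.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)

    # Average, then add Gaussian noise calibrated to the clip norm.
    noisy_grad = clipped.mean(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm / batch_size, size=d)
    w -= lr * noisy_grad

print("trained weights (first five, noisy):", np.round(w[:5], 3))
```

Bounding each record's contribution and adding noise is what limits how much any single training example can be inferred from the released model; production systems typically use a library with a privacy accountant rather than hand-rolled code like this.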