Privacy Threat
Privacy threats in machine learning, particularly in federated learning and large language models, center on the risks of data leakage and model manipulation. Current research investigates attack vectors such as membership inference, data poisoning, and gradient-based data reconstruction across diverse architectures, including graph neural networks and Stable Diffusion models. Understanding and mitigating these vulnerabilities is crucial to the responsible development and deployment of machine learning systems, as they affect both the trustworthiness of AI and the protection of sensitive user data.
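To make the membership inference threat concrete, here is a minimal sketch of the classic loss-threshold attack (Yeom et al., 2018): an example is guessed to be a training-set member when the model's loss on it is low, since models tend to fit their training data more tightly than unseen data. The loss arrays and the threshold value below are synthetic, illustrative assumptions, not results from any of the listed papers.

```python
import numpy as np

# Synthetic stand-ins for per-example losses: members (training data)
# typically have lower loss than non-members (held-out data).
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.2, size=1000)     # assumed training-set losses
nonmember_losses = rng.exponential(scale=1.0, size=1000)  # assumed held-out losses

def membership_guess(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Predict 'member' (True) when the loss falls below the threshold."""
    return losses < threshold

threshold = 0.5  # illustrative; in practice calibrated via shadow models or a validation split
tpr = membership_guess(member_losses, threshold).mean()     # true-positive rate on members
fpr = membership_guess(nonmember_losses, threshold).mean()  # false-positive rate on non-members
advantage = tpr - fpr  # membership advantage: 0 means the attack learns nothing

print(f"TPR={tpr:.2f}  FPR={fpr:.2f}  advantage={advantage:.2f}")
```

The gap between TPR and FPR quantifies leakage: a well-generalized or differentially private model narrows the loss gap between members and non-members, driving the advantage toward zero.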
Papers
Fourteen papers, published between April 6, 2023 and September 30, 2024.