Privacy Threat

Privacy threats in machine learning, particularly in federated learning and large language models, are a significant area of concern, centered on the risks of data leakage and model manipulation. Current research investigates attack vectors such as membership inference, poisoning, and gradient-based data reconstruction across diverse architectures, including graph neural networks and Stable Diffusion models. Understanding and mitigating these vulnerabilities is crucial for the responsible development and deployment of machine learning systems, as they affect both the trustworthiness of AI and the protection of sensitive user data.
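To make one of these attack vectors concrete, the sketch below illustrates a loss-threshold membership inference attack: a model tends to assign lower loss to examples it was trained on, and an adversary can exploit that gap to guess whether a given record was in the training set. All numbers (loss distributions, the threshold of 0.5) are illustrative assumptions, not results from any particular paper.

```python
import numpy as np

# Simulated per-example losses: training-set members typically incur
# lower loss than held-out non-members. The distributions below are
# purely illustrative assumptions for the sketch.
rng = np.random.default_rng(0)
member_losses = rng.normal(loc=0.2, scale=0.1, size=1000)     # seen in training
nonmember_losses = rng.normal(loc=0.8, scale=0.3, size=1000)  # held out

def infer_membership(losses, threshold):
    """Predict 'member' (True) when the model's loss falls below the threshold."""
    return losses < threshold

threshold = 0.5  # assumed; in practice calibrated, e.g. via shadow models
tpr = infer_membership(member_losses, threshold).mean()     # true-positive rate
fpr = infer_membership(nonmember_losses, threshold).mean()  # false-positive rate
attack_accuracy = 0.5 * (tpr + (1 - fpr))                   # balanced accuracy
print(f"attack accuracy: {attack_accuracy:.2f}")
```

An attack accuracy well above 0.5 (random guessing) signals that the model leaks membership information; defenses such as differential privacy aim to push this gap back toward chance.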

Papers