Latent Vulnerability
Latent vulnerability research explores the hidden weaknesses of machine learning models, particularly large language models (LLMs) and deep neural networks (DNNs), and how those weaknesses can be exploited through adversarial attacks, backdoors, and data poisoning. Current work investigates the susceptibility of diverse architectures, including transformers, generative autoencoders, and neural radiance fields, often using techniques such as gradient-based attacks and prompt engineering. Understanding and mitigating these vulnerabilities is crucial for the safe and reliable deployment of AI systems across applications ranging from healthcare and finance to education and security.
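To make the gradient-based attack idea concrete, the sketch below applies the Fast Gradient Sign Method (FGSM) to a toy logistic-regression "model" in NumPy. This is an illustrative example, not code from any of the surveyed papers: the model, weights, and epsilon value are all assumptions chosen for demonstration. The attack perturbs the input in the direction of the sign of the loss gradient, which increases the loss and can flip the model's prediction.

```python
import numpy as np

def loss(x, w, b, y):
    """Binary cross-entropy loss of a logistic model on input x."""
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))  # sigmoid prediction
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(x, w, b, y, epsilon=0.25):
    """FGSM: x_adv = x + epsilon * sign(dL/dx).

    For binary cross-entropy with a sigmoid output,
    the input gradient is (p - y) * w.
    """
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

# Toy setup: fixed weights stand in for a trained model (hypothetical values).
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.4, 0.8])
y = 1.0

x_adv = fgsm_perturb(x, w, b, y)
print(loss(x, w, b, y), loss(x_adv, w, b, y))  # adversarial loss is higher
```

The same one-step principle underlies stronger iterative attacks (e.g. PGD), which repeat the perturbation with projection back into an allowed distortion budget.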