Latent Vulnerability
Latent vulnerability research examines the hidden weaknesses of machine learning models, particularly large language models (LLMs) and deep neural networks (DNNs), and how those weaknesses can be exploited through adversarial attacks, backdoors, and data poisoning. Current work probes the susceptibility of a range of architectures, including transformers, generative autoencoders, and neural radiance fields, often using gradient-based attacks and prompt engineering to surface failure modes. Understanding and mitigating these vulnerabilities is crucial for the safe and reliable deployment of AI systems across domains from healthcare and finance to education and security.
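To make the gradient-based attack concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest gradient-based adversarial attacks: it computes x_adv = x + epsilon * sign(grad_x loss(f(x), y)). This is an illustrative example, not the method of any particular paper above; it assumes a PyTorch image classifier with inputs normalized to [0, 1], and the function name `fgsm_attack` and the default `epsilon` are our own choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """One-step FGSM: x_adv = x + epsilon * sign(grad_x loss(f(x), y)).

    Assumes `model` is a classifier over inputs in [0, 1]; `epsilon`
    bounds the perturbation in the L-infinity norm.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each input coordinate by epsilon in the direction that
    # increases the loss, then clamp back to the valid [0, 1] range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Taking the sign of the gradient yields the largest loss increase achievable in a single step under an L-infinity budget of epsilon, which is why even this one-step attack often degrades accuracy sharply; iterative variants such as PGD repeat the step with projection back onto the epsilon-ball.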
Papers
Eighteen papers on this topic, published between December 10, 2023 and November 12, 2024.