Latent Vulnerability
Latent vulnerability research explores the hidden weaknesses of machine learning models, particularly large language models (LLMs) and deep neural networks (DNNs), focusing on how these vulnerabilities can be exploited through adversarial attacks, backdoors, and data poisoning. Current research investigates the susceptibility of a range of model architectures, including transformers, generative autoencoders, and neural radiance fields, often using techniques such as gradient-based attacks and adversarial prompt engineering to expose weaknesses. Understanding and mitigating these vulnerabilities is crucial for the safe and reliable deployment of AI systems across diverse applications, from healthcare and finance to education and security.
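To make the idea of a gradient-based attack concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression classifier. The model, weights, and inputs are hypothetical illustrations, not drawn from any of the papers below; real attacks target deep networks and compute the input gradient via automatic differentiation, but the principle is the same: perturb the input in the direction that most increases the loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """FGSM against a toy logistic-regression model.

    For binary cross-entropy loss, the gradient of the loss with
    respect to the input x is (p - y) * w, where p = sigmoid(w.x + b).
    The attack steps in the sign of that gradient.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w                 # dL/dx in closed form
    return x + eps * np.sign(grad_x)     # adversarial example

# Hypothetical model and a point it classifies correctly (score > 0 -> class 1)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_attack(x, y, w, b, eps=2.0)
score_before = np.dot(w, x) + b       # positive: correct class
score_after = np.dot(w, x_adv) + b    # pushed negative: misclassified
```

For this toy example the perturbation flips the model's decision; against deep networks the same step, with much smaller `eps`, routinely causes misclassifications that are imperceptible to humans.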