Latent Vulnerability
Latent vulnerability research examines hidden weaknesses in machine learning models, particularly large language models (LLMs) and deep neural networks (DNNs), and how those weaknesses can be exploited through adversarial attacks, backdoors, and data poisoning. Current work probes the susceptibility of diverse architectures, including transformers, generative autoencoders, and neural radiance fields, often using techniques such as gradient-based attacks and prompt engineering. Understanding and mitigating these vulnerabilities is essential for the safe and reliable deployment of AI systems across applications ranging from healthcare and finance to education and security.
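To make the "gradient-based attack" idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM) applied to a toy logistic-regression model. The model, weights, and epsilon value are all illustrative assumptions, not taken from any paper in this collection; real attacks target deep networks, but the mechanism — perturbing the input in the direction that increases the loss — is the same.

```python
import numpy as np

# Hypothetical "trained" model: logistic regression with fixed weights.
rng = np.random.default_rng(0)
w = rng.normal(size=4)   # assumed weight vector (illustrative)
b = 0.1                  # assumed bias
x = rng.normal(size=4)   # clean input
y = 1.0                  # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x):
    # Binary cross-entropy of the model's prediction against the true label.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

# Gradient of the loss with respect to the INPUT (not the weights):
# for logistic regression this is (p - y) * w.
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM step: move each input coordinate by epsilon in the sign of the gradient.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

# The perturbation raises the loss, degrading the model's confidence in
# the correct label (guaranteed here because the loss is convex in x).
assert loss(x_adv) >= loss(x)
```

In deep networks the same input gradient is obtained by backpropagation, and the perturbation is typically small enough to be imperceptible while still flipping the model's prediction — which is what makes such vulnerabilities "latent."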
Papers
May 2, 2023
March 24, 2023
March 13, 2023
January 19, 2023
January 12, 2023
December 10, 2022
November 29, 2022
November 21, 2022
November 20, 2022
November 16, 2022
October 28, 2022
October 15, 2022
October 5, 2022
September 13, 2022
July 11, 2022
June 22, 2022
May 22, 2022
February 12, 2022
January 10, 2022
December 20, 2021