Latent Vulnerability

Latent vulnerability research examines hidden weaknesses in machine learning models, particularly large language models (LLMs) and deep neural networks (DNNs), focusing on how those weaknesses can be exploited through adversarial attacks, backdoors, and data poisoning. Current work investigates the susceptibility of diverse architectures, including transformers, generative autoencoders, and neural radiance fields, to such attacks, often using techniques like gradient-based perturbation and prompt engineering. Understanding and mitigating these vulnerabilities is crucial for the safe and reliable deployment of AI systems across applications ranging from healthcare and finance to education and security.
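
To make the gradient-based attacks mentioned above concrete, here is a minimal sketch of one canonical example, the fast gradient sign method (FGSM), written in PyTorch. The toy model, input shapes, and epsilon value are illustrative assumptions for demonstration, not drawn from any particular paper listed below.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Perturb input x in the direction that maximizes the loss for label y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient, then clamp to a valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage with a hypothetical toy classifier (shapes chosen for illustration only):
if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)   # a single "image"
    y = torch.tensor([3])          # its (assumed) true label
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())  # perturbation magnitude is bounded by epsilon
```

The key property this sketch illustrates is that a single gradient step with respect to the input, rather than the weights, can shift a model's prediction while the perturbation stays imperceptibly small; many of the attacks surveyed in the papers below are iterative or constrained refinements of this idea.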

Papers