Latent Vulnerability
Latent vulnerability research explores the hidden weaknesses of machine learning models, particularly large language models (LLMs) and deep neural networks (DNNs), focusing on how these weaknesses can be exploited through adversarial attacks, backdoors, and data poisoning. Current work investigates the susceptibility of a range of architectures, including transformers, generative autoencoders, and neural radiance fields, often probing them with gradient-based attacks and adversarial prompt engineering. Understanding and mitigating these vulnerabilities is crucial for the safe and reliable deployment of AI systems across diverse applications, from healthcare and finance to education and security.
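To make the gradient-based attacks mentioned above concrete, the sketch below shows the fast gradient sign method (FGSM) against a generic PyTorch classifier. The model, input batch, and epsilon budget are placeholder assumptions for illustration, not details drawn from the papers listed in this section.

```python
# Minimal FGSM sketch, assuming a differentiable PyTorch classifier and
# inputs scaled to [0, 1]. Illustrative only; not taken from the listed papers.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Perturb x in the direction of the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each input element by +/- epsilon to increase the loss,
    # then clamp back to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage:
# model.eval()
# x_adv = fgsm_attack(model, images, labels, epsilon=0.03)
# clean_preds, adv_preds = model(images).argmax(1), model(x_adv).argmax(1)
```

The single-step sign update is the simplest form of this attack family; iterative variants apply the same step repeatedly under a norm constraint.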
Papers
Can LLM Prompting Serve as a Proxy for Static Analysis in Vulnerability Detection
Ira Ceka, Feitong Qiao, Anik Dey, Aastha Valechia, Gail Kaiser, Baishakhi Ray
Using Instruction-Tuned Large Language Models to Identify Indicators of Vulnerability in Police Incident Narratives
Sam Relins, Daniel Birks, Charlie Lloyd