Threat Word

"Threat word" research examines the vulnerability of AI systems to malicious manipulation, focusing on how adversarial attacks compromise their functionality, safety, and trustworthiness. Current work investigates these threats across diverse AI applications, including autonomous vehicles (using LiDAR and vision-language models), pricing algorithms, federated learning, and large language models (LLMs), and employs techniques such as adversarial examples, data poisoning, and prompt injection. Understanding and mitigating these vulnerabilities is crucial for the responsible development and deployment of AI, with implications ranging from transportation safety to economic fairness and online security.
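To make one of the named attack families concrete, the sketch below illustrates an adversarial example via the fast gradient sign method (FGSM) on a toy linear classifier. This is a minimal illustration, not a method from any paper surveyed here; the tiny model, labels, and epsilon value are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # weights of a hypothetical linear classifier
x = rng.normal(size=8)   # clean input
y = 1.0                  # true label in {-1, +1}

def margin_loss(v):
    # Hinge-style loss: positive when the classifier is wrong or unsure.
    return max(0.0, 1.0 - y * float(w @ v))

# Gradient of the loss w.r.t. the input; for this linear model it is
# simply -y * w whenever the hinge term is active.
grad = -y * w if margin_loss(x) > 0 else np.zeros_like(w)

# FGSM step: nudge every input feature by epsilon in the sign of the
# gradient, i.e. the direction that increases the loss.
eps = 0.1
x_adv = x + eps * np.sign(grad)

print("clean loss:      ", round(margin_loss(x), 4))
print("adversarial loss:", round(margin_loss(x_adv), 4))
```

The same principle, a small input perturbation aligned with the loss gradient, underlies attacks on the far larger vision and LiDAR models discussed above.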

Papers