Threat Word
"Threat word" research encompasses the vulnerabilities of various AI systems to malicious manipulation, focusing on how adversarial attacks compromise their functionality, safety, and trustworthiness. Current research investigates these threats across diverse AI applications, including autonomous vehicles (using LiDAR and vision-language models), pricing algorithms, federated learning, and large language models (LLMs), employing techniques like adversarial examples, data poisoning, and prompt injection. Understanding and mitigating these vulnerabilities is crucial for ensuring the responsible development and deployment of AI, impacting fields ranging from transportation safety to economic fairness and online security.