AI Vulnerability
AI vulnerability research identifies and mitigates weaknesses in artificial intelligence systems in order to improve their security, reliability, and trustworthiness. Current work concentrates on vulnerabilities arising from the data footprints left in trained models, on susceptibility to adversarial attacks and hardware faults such as silent data corruption, and on the limitations of generative AI in critical-thinking tasks. This research underpins the safe and responsible deployment of AI across sectors, informing both the design of robust AI algorithms and the establishment of effective security practices.
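To make the notion of an adversarial attack concrete, the sketch below implements the classic fast gradient sign method (FGSM) against an image classifier. It is a minimal illustration only: the model, the [0, 1] input range, and the epsilon budget are assumptions chosen for the example, not the setup of any particular paper covered by this topic.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """One-step FGSM perturbation (illustrative sketch).

    model   -- a differentiable classifier returning logits
    x       -- input batch, assumed scaled to [0, 1]
    y       -- ground-truth labels for x
    epsilon -- perturbation budget (assumed value for illustration)
    """
    x_adv = x.clone().detach().requires_grad_(True)
    # Loss the attacker wants to increase for the true labels.
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction of the gradient sign, then clip back
    # to the valid input range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

An input perturbed this way is typically near-indistinguishable from the original to a human observer, yet it can flip the model's prediction, which is the kind of failure mode that robustness research aims to detect and close.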