AI Vulnerability

AI vulnerability research focuses on identifying and mitigating weaknesses in artificial intelligence systems in order to improve their security, reliability, and trustworthiness. Current efforts concentrate on understanding vulnerabilities stemming from the data footprints that training leaves in models, susceptibility to adversarial attacks and hardware faults (such as silent data corruption), and the limitations of generative AI in critical thinking tasks. This research is crucial for the safe and responsible deployment of AI across sectors, informing both the design of robust AI algorithms and the establishment of effective security protocols.
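To make the adversarial-attack theme concrete, the sketch below shows the fast gradient sign method (FGSM), one standard way to probe a model's susceptibility to adversarial perturbations. It is a minimal illustration only: the toy classifier, random input batch, and the epsilon value are assumptions for demonstration, not taken from any paper listed here.

```python
# Minimal FGSM sketch: perturb inputs in the direction that increases the loss,
# bounded by an L-infinity budget epsilon. Placeholder model and data.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of x within an epsilon L-inf ball."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take a single signed-gradient step and clamp back to the valid input range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy linear classifier and a random "image" batch, purely illustrative.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(4, 1, 28, 28)
    y = torch.randint(0, 10, (4,))
    x_adv = fgsm_perturb(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())
```

Comparing a model's accuracy on x and x_adv gives a rough first measure of its adversarial robustness; the papers below study this and the other vulnerability classes in far more depth.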

Papers