Robust AI
Robust AI research aims to create artificial intelligence systems that are reliable, resilient, and trustworthy, even when faced with unexpected inputs, adversarial attacks, or changing environments. Current efforts focus on developing defenses against adversarial examples and prompt injection attacks, improving model interpretability and explainability, and designing robust architectures such as hybrid neuro-symbolic systems and hierarchical reinforcement learning frameworks. This work is crucial for ensuring the safe and effective deployment of AI in high-stakes applications like healthcare, finance, and autonomous systems, ultimately fostering greater trust and wider adoption of AI technologies.
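As a concrete illustration of the adversarial-examples threat model mentioned above, the sketch below uses the fast gradient sign method (FGSM), a standard baseline attack, to perturb inputs and compare a classifier's clean and adversarial accuracy. The model, data, and epsilon value are illustrative stand-ins and are not drawn from any specific paper in this collection.

# Minimal FGSM robustness probe (illustrative sketch; any differentiable
# PyTorch classifier could stand in for the toy model below).
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of x using FGSM."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximizes the loss, then clamp to the
    # valid input range [0, 1].
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy stand-in classifier: 3x32x32 inputs, 10 classes.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(8, 3, 32, 32)      # batch of "images" in [0, 1]
    y = torch.randint(0, 10, (8,))    # arbitrary labels
    x_adv = fgsm_attack(model, x, y)
    # Robustness check: accuracy on clean vs. perturbed inputs.
    clean_acc = (model(x).argmax(1) == y).float().mean().item()
    adv_acc = (model(x_adv).argmax(1) == y).float().mean().item()
    print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")

A large gap between the two accuracies on a trained model is one simple signal that the model is sensitive to small, worst-case input perturbations, which is what many of the defenses surveyed above aim to reduce.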