Trustworthy Autonomous Systems
Research on trustworthy autonomous systems, particularly autonomous vehicles, aims to build reliable and safe systems by addressing challenges in explainability, safety verification, and human-AI collaboration. Current work focuses on developing explainable AI (XAI) methods that improve transparency and user trust, integrating physics-based models with reinforcement learning for greater safety and robustness, and establishing rigorous safety-case frameworks for verification and validation. These advances are essential for the safe and responsible deployment of autonomous systems across applications, shaping both the scientific understanding of AI safety and the practical realization of autonomous technologies.
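To make the "physics-based models with reinforcement learning" idea more concrete, the sketch below shows one common pattern under assumed details: a simple kinematic model used as a safety filter around a learned driving policy, overriding any proposed action that would violate a stopping-distance constraint. All names (PointMassModel, rl_policy, safe_action) and the numeric parameters are illustrative assumptions, not taken from any particular paper summarized here.

```python
import numpy as np

class PointMassModel:
    """Simple kinematic (physics-based) longitudinal model: predicts the next
    position and speed of a vehicle given an acceleration command."""

    def __init__(self, dt=0.1):
        self.dt = dt

    def predict(self, state, accel):
        pos, vel = state
        new_vel = max(vel + accel * self.dt, 0.0)  # no reversing in this demo
        new_pos = pos + new_vel * self.dt
        return np.array([new_pos, new_vel])


def rl_policy(state):
    """Stand-in for a learned policy: naively accelerates toward 15 m/s."""
    _, vel = state
    return 2.0 if vel < 15.0 else 0.0


def braking_distance(vel, max_brake=4.0):
    """Distance needed to stop from speed `vel` under constant braking."""
    return vel ** 2 / (2.0 * max_brake)


def safe_action(state, model, obstacle_pos, min_gap=5.0):
    """Physics-informed safety filter: accept the RL action only if, after
    executing it, the model says the vehicle can still brake to a stop at
    least `min_gap` meters before the obstacle; otherwise brake now."""
    proposed = rl_policy(state)
    pos_next, vel_next = model.predict(state, proposed)
    if pos_next + braking_distance(vel_next) + min_gap <= obstacle_pos:
        return proposed   # RL action preserves a safe stopping margin
    return -4.0           # conservative fallback: maximum braking


if __name__ == "__main__":
    model = PointMassModel()
    state = np.array([0.0, 10.0])   # position (m), speed (m/s)
    obstacle = 30.0                 # stopped obstacle 30 m ahead
    for step in range(30):
        a = safe_action(state, model, obstacle)
        state = model.predict(state, a)
        print(f"t={step * model.dt:4.1f}s  pos={state[0]:6.2f} m  "
              f"vel={state[1]:5.2f} m/s  accel={a:+.1f} m/s^2")
```

The design point this sketch illustrates is that the physics model only needs to be conservative and checkable; the learned policy can remain a black box, which is one reason this filtering pattern recurs in safety-verification work.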