Failure Explanation
Failure explanation research seeks to understand and address shortcomings in artificial intelligence systems in order to improve their reliability and trustworthiness. Current efforts concentrate on identifying and classifying failure modes across diverse applications, from object detection in images to complex robotic tasks and large language model outputs. Common techniques include Bayesian networks, clustering algorithms, and distance metrics, which are used to analyze model behavior and pinpoint the causes of errors. This work is crucial for enhancing the robustness and explainability of AI systems, leading to improved performance and greater user confidence across a wide range of domains.
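One recurring pattern in this literature is clustering a model's errors to surface distinct failure modes. The sketch below is purely illustrative (the data, the two-dimensional "embeddings", and the failure-mode labels in the comments are hypothetical): misclassified inputs are represented as feature vectors and grouped with a simple k-means loop using Euclidean distance, so that each cluster can then be inspected as a candidate failure mode.

```python
# Illustrative sketch: grouping a model's misclassified inputs into
# failure modes by clustering their feature embeddings. All data here
# is synthetic; real pipelines would use embeddings from the model.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: returns a cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its assigned points;
        # keep the old center if a cluster happens to be empty.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy "error embeddings" drawn from two well-separated groups,
# standing in for two hypothetical failure modes.
rng = np.random.default_rng(1)
errors = np.vstack([
    rng.normal(loc=0.0, scale=0.1, size=(20, 2)),  # e.g. blur failures
    rng.normal(loc=5.0, scale=0.1, size=(20, 2)),  # e.g. occlusion failures
])
labels, centers = kmeans(errors, k=2)
print(np.bincount(labels))  # expect two clusters of 20 errors each
```

In practice the clusters would be inspected by a human (or summarized automatically) to give each failure mode an interpretable description.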