Textual Explanation
Textual explanation in AI aims to make complex model decisions understandable by generating human-readable descriptions of a model's reasoning. Current research focuses on improving the coherence, faithfulness, and utility of these explanations, often leveraging large language models (LLMs) and other techniques in settings that range from explaining robot failures to supporting medical diagnosis and recommendation systems. This work is crucial for building trust in AI systems, facilitating debugging and model improvement, and enabling effective human-AI collaboration. Developing robust evaluation metrics for textual explanations remains a key challenge.
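As a concrete illustration of the LLM-based approach, the sketch below prompts a language model to verbalize a prediction together with its most influential features. It is a minimal, hypothetical example: `llm_complete` is a stub standing in for a real LLM client, and `explain_prediction` and the loan-denial scenario are invented for illustration, not drawn from any specific paper.

```python
# Minimal sketch of post-hoc textual explanation via LLM prompting.
# llm_complete and explain_prediction are illustrative assumptions,
# not an API from any particular paper or library.

def llm_complete(prompt: str) -> str:
    """Stand-in for a call to a hosted LLM; swap in a real client here."""
    return ("The loan was denied mainly because the debt-to-income "
            "ratio (0.62) exceeds the typical approval threshold.")

def explain_prediction(prediction: str, attributions: dict[str, float]) -> str:
    """Turn a model's output and its top feature attributions into a
    human-readable explanation by prompting an LLM."""
    # Keep only the three features with the largest absolute attribution.
    top = sorted(attributions.items(), key=lambda kv: -abs(kv[1]))[:3]
    evidence = "; ".join(f"{name}: {score:+.2f}" for name, score in top)
    prompt = (
        f"A model predicted: {prediction}.\n"
        f"Most influential features (signed attribution scores): {evidence}.\n"
        "Explain this decision in one plain-English sentence, citing only "
        "the evidence above."  # constrains the LLM to the given evidence
    )
    return llm_complete(prompt)

if __name__ == "__main__":
    print(explain_prediction(
        "loan application denied",
        {"debt_to_income": 0.62, "credit_history_len": -0.21, "income": -0.05},
    ))
```

Instructing the model to cite only the supplied evidence is one simple way to encourage faithfulness, but verifying that the generated text actually reflects the model's decision process is exactly the open evaluation problem noted above.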
Papers
Nineteen papers, dated September 11, 2023 through October 21, 2024.