Textual Explanation
Textual explanation in AI aims to make complex model decisions understandable by generating human-readable descriptions of a model's reasoning process. Current research focuses on improving the coherence, faithfulness, and utility of these explanations across diverse applications, leveraging large language models (LLMs) and other techniques to explain everything from robot failures to medical diagnoses and recommendations. This work is crucial for building trust in AI systems, facilitating debugging and model improvement, and enabling effective human-AI collaboration across domains. Developing robust evaluation metrics for textual explanations remains a key open challenge.
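To make the LLM-based approach concrete, below is a minimal sketch of how a textual explanation might be generated for a classifier's prediction. The `Prediction` container, the `call_llm` stand-in, and the prompt wording are all illustrative assumptions, not an API or method from any specific paper; the key idea shown is grounding the prompt in the model's actual attributions so the generated explanation stays faithful to the evidence rather than inventing reasons.

```python
# Minimal sketch: LLM-based textual explanation of a model prediction.
# `call_llm` is a hypothetical stand-in for any chat-completion client;
# the prompt structure is illustrative, not from a specific paper.

from dataclasses import dataclass


@dataclass
class Prediction:
    label: str
    confidence: float
    top_features: list[tuple[str, float]]  # (feature name, attribution score)


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion client."""
    raise NotImplementedError


def explain(pred: Prediction) -> str:
    # Serialize the model's evidence so the LLM grounds its explanation
    # in actual attribution scores (a simple faithfulness safeguard).
    evidence = "\n".join(
        f"- {name}: {score:+.2f}" for name, score in pred.top_features
    )
    prompt = (
        f"The model predicted '{pred.label}' with confidence {pred.confidence:.0%}.\n"
        f"The most influential input features and their attribution scores were:\n"
        f"{evidence}\n"
        "Write a short, plain-language explanation of this decision that refers "
        "only to the evidence above."
    )
    return call_llm(prompt)


# Example: explaining a loan-approval prediction (values are illustrative).
pred = Prediction(
    label="approve",
    confidence=0.91,
    top_features=[("income_to_debt_ratio", 0.42), ("late_payments", -0.08)],
)
# print(explain(pred))
```

Restricting the prompt to the listed evidence is one common mitigation for unfaithful explanations; evaluating whether the generated text actually reflects the model's computation is exactly the open metrics problem noted above.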