Textual Explanation

Textual explanation in AI aims to make complex model decisions understandable by generating human-readable descriptions of a model's reasoning. Current research focuses on improving the coherence, faithfulness, and utility of these explanations, increasingly leveraging large language models (LLMs) and other techniques to generate them for applications ranging from diagnosing robot failures to medical diagnosis and recommendation systems. This work is crucial for building trust in AI systems, facilitating debugging and model improvement, and enabling effective human-AI collaboration. Developing robust evaluation metrics for textual explanations remains a key open challenge.
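
To make the generate-then-evaluate pattern above concrete, the sketch below prompts a language model for a one-sentence rationale and scores it with a crude faithfulness proxy. This is a minimal illustration, not a method from any of the papers: `call_llm` is a hypothetical placeholder for whatever LLM API is available, and the token-overlap score only checks whether input features are mentioned, not whether they actually drove the prediction, which is precisely the gap real faithfulness metrics try to close.

```python
# Minimal sketch: generate a textual explanation for a model decision,
# then score it with a toy faithfulness proxy. `call_llm` is a
# hypothetical stand-in for any LLM completion API.

def call_llm(prompt: str) -> str:
    # Placeholder: in practice, call an actual LLM endpoint here.
    return ("The loan was declined because the applicant's "
            "debt-to-income ratio exceeds the 40% threshold.")

def explain_decision(features: dict, decision: str) -> str:
    """Prompt an LLM to verbalize why the model reached `decision`."""
    prompt = (
        "A model made the following decision based on these inputs.\n"
        f"Inputs: {features}\n"
        f"Decision: {decision}\n"
        "Explain the decision in one sentence for a non-expert."
    )
    return call_llm(prompt)

def overlap_faithfulness(explanation: str, features: dict) -> float:
    """Fraction of input features whose name words appear in the text.

    A toy proxy: mentioning a feature does not prove it influenced
    the prediction, which is why robust faithfulness metrics remain
    an open problem.
    """
    text = explanation.lower()
    mentioned = sum(
        1 for name in features
        if all(word in text for word in name.lower().split("_"))
    )
    return mentioned / len(features) if features else 0.0

if __name__ == "__main__":
    features = {"debt_to_income_ratio": 0.47, "credit_score": 680}
    explanation = explain_decision(features, decision="decline")
    print(explanation)
    print(f"faithfulness proxy: {overlap_faithfulness(explanation, features):.2f}")
```

With the canned response above, the proxy scores 0.50: the explanation names the debt-to-income ratio but not the credit score, illustrating how even a trivial metric can flag explanations that omit inputs the model consumed.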

Papers