Natural Language Explanation
Natural language explanation (NLE) research focuses on generating human-understandable explanations for AI model decisions, with the goal of improving transparency, trust, and user understanding. Current efforts concentrate on producing accurate, consistent, and faithful explanations with large language models (LLMs), often augmented by knowledge graphs or retrieval mechanisms, and on evaluating those explanations with both automatic metrics and human assessments. The field matters for the trustworthiness and usability of AI systems across diverse applications, from medicine and law to education and robotics, because it bridges the gap between complex model outputs and human comprehension.
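As a minimal illustration of the automatic-metric side of evaluation mentioned above, a common baseline is token-overlap F1 between a generated explanation and a human-written reference. The sketch below is illustrative only; function and variable names are our own and not drawn from any specific paper, and real evaluations typically add tokenization, stemming, or learned metrics on top of this.

```python
from collections import Counter

def token_f1(generated: str, reference: str) -> float:
    """Token-level F1 overlap between a generated explanation and a
    reference explanation -- a simple automatic evaluation baseline.
    (Illustrative sketch; not a metric from any particular NLE paper.)"""
    gen = generated.lower().split()
    ref = reference.lower().split()
    if not gen or not ref:
        return 0.0
    # Multiset intersection counts each shared token at most as often
    # as it appears in both texts.
    overlap = sum((Counter(gen) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(gen)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the model predicts rain because pressure dropped",
               "pressure dropped, so the model predicts rain"))
```

Metrics like this are cheap to run across many model outputs, but they reward surface overlap rather than faithfulness, which is why the literature pairs them with human assessments.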