Natural Language Explanation
Natural language explanation (NLE) research focuses on generating human-understandable explanations for AI model decisions, aiming to improve transparency, trust, and user understanding. Current efforts concentrate on generating accurate, consistent, and faithful explanations with large language models (LLMs), often augmented with knowledge graphs or retrieval mechanisms, and on evaluating those explanations with both automatic metrics and human assessments. By bridging the gap between complex model outputs and human comprehension, the field strengthens the trustworthiness and usability of AI systems across diverse applications, from medicine and law to education and robotics.
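The retrieval-augmented generation pattern mentioned above can be sketched in a few lines. This is a minimal, illustrative pipeline, not any specific system from the listed papers: `retrieve` here is a naive word-overlap ranker standing in for a real retriever, and `call_llm` is a hypothetical stub where a real system would query an LLM.

```python
def retrieve(query, knowledge_base, k=2):
    """Rank knowledge-base facts by naive word overlap with the query.

    A real system would use a dense retriever or knowledge-graph lookup;
    word overlap keeps the sketch self-contained.
    """
    q = set(query.lower().split())
    return sorted(knowledge_base,
                  key=lambda fact: len(q & set(fact.lower().split())),
                  reverse=True)[:k]

def build_prompt(prediction, inputs, facts):
    """Assemble an explanation prompt grounding the LLM in retrieved facts."""
    lines = ["Explain the model's decision in plain language.",
             f"Input: {inputs}",
             f"Prediction: {prediction}",
             "Relevant background facts:"]
    lines += [f"- {fact}" for fact in facts]
    lines.append("Explanation:")
    return "\n".join(lines)

def call_llm(prompt):
    # Hypothetical stub: a real system would send `prompt` to an LLM API.
    return "Stub explanation grounded in the retrieved facts."

def explain(prediction, inputs, knowledge_base):
    """Generate a natural language explanation for a model decision."""
    facts = retrieve(inputs, knowledge_base)
    return call_llm(build_prompt(prediction, inputs, facts))
```

The design point the sketch illustrates is that the explanation is conditioned on retrieved evidence rather than on the LLM's parametric knowledge alone, which is one common strategy for improving faithfulness.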