Explanation Generation
Explanation generation focuses on creating human-understandable justifications for the decisions of complex AI systems, particularly large language models (LLMs) and graph neural networks (GNNs). Current research emphasizes improving explanation quality through techniques such as retrieval-augmented generation, self-rationalization, and iterative refinement across multiple LLMs, often incorporating knowledge graphs or other external knowledge sources to improve accuracy and credibility. The field is crucial for building trust in AI systems across diverse applications, from medical diagnosis and fact-checking to recommender systems and robotics, because it makes their reasoning processes transparent and interpretable.
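To make the retrieval-augmented idea concrete, here is a minimal sketch of grounding an explanation in retrieved evidence. Everything in it is hypothetical: the toy knowledge base, the word-overlap retriever, and the template-based explanation stand in for the dense retrievers and LLM generators that actual systems use.

```python
# Hypothetical sketch of retrieval-augmented explanation generation.
# A real system would use dense retrieval over a large corpus or
# knowledge graph, and an LLM to verbalize the explanation.

KNOWLEDGE_BASE = [
    "Aspirin inhibits COX enzymes, reducing prostaglandin synthesis.",
    "Prostaglandins mediate inflammation and pain signaling.",
    "Graph neural networks aggregate information from node neighborhoods.",
]

def retrieve(query: str, kb: list[str], k: int = 2) -> list[str]:
    """Rank knowledge-base entries by word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(kb, key=lambda s: -len(q_words & set(s.lower().split())))
    return ranked[:k]

def explain(decision: str, query: str) -> str:
    """Assemble an explanation grounded in the retrieved evidence."""
    evidence = retrieve(query, KNOWLEDGE_BASE)
    return f"Decision: {decision}. Supporting evidence: {' '.join(evidence)}"

print(explain("aspirin relieves pain",
              "why does aspirin reduce pain and inflammation"))
```

The key design point this illustrates is that the justification cites retrieved external knowledge rather than relying only on the model's internal reasoning, which is what lends retrieval-augmented explanations their credibility.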