Explanation Generation
Explanation generation focuses on creating human-understandable justifications for the decisions made by complex AI systems, particularly large language models (LLMs) and graph neural networks (GNNs). Current research emphasizes improving explanation quality through techniques like retrieval-augmented generation, self-rationalization, and iterative refinement using multiple LLMs, often incorporating knowledge graphs or other external knowledge sources to enhance accuracy and credibility. This field is crucial for building trust in AI systems across diverse applications, from medical diagnosis and fact-checking to recommender systems and robotics, by making their reasoning processes transparent and interpretable.
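The retrieval-augmented approach mentioned above can be illustrated with a minimal sketch: facts relevant to a model's decision are retrieved from an external knowledge source and woven into a justification. Everything here — the tiny knowledge base, the word-overlap retriever, and the template-based `generate_explanation` stand-in for an LLM call — is an illustrative assumption, not a specific system from the literature.

```python
# Minimal sketch of retrieval-augmented explanation generation.
# KNOWLEDGE_BASE, the overlap scorer, and the template generator are
# all hypothetical stand-ins for a real retriever + LLM pipeline.

KNOWLEDGE_BASE = [
    "Aspirin inhibits the enzyme cyclooxygenase.",
    "Cyclooxygenase produces prostaglandins that cause inflammation.",
    "Paris is the capital of France.",
]

def _tokens(text: str) -> set[str]:
    """Lowercase, punctuation-stripped word set (toy tokenizer)."""
    return set(text.lower().replace(".", "").split())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank knowledge-base facts by word overlap with the query."""
    q = _tokens(query)
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda fact: len(q & _tokens(fact)),
        reverse=True,
    )
    return scored[:k]

def generate_explanation(decision: str, query: str) -> str:
    """Compose a justification grounded in retrieved facts.
    A real system would feed the facts into an LLM prompt instead
    of using a fixed template."""
    evidence = " ".join(retrieve(query))
    return f"Decision: {decision}. Supporting evidence: {evidence}"

print(generate_explanation(
    "aspirin reduces inflammation",
    "why does aspirin reduce inflammation",
))
```

Grounding the justification in retrieved facts, rather than generating it free-form, is what gives these systems their claimed gains in accuracy and credibility: the explanation can be checked against the cited evidence.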