Explanation Generation
Explanation generation focuses on creating human-understandable justifications for the decisions made by complex AI systems, particularly large language models (LLMs) and graph neural networks (GNNs). Current research emphasizes improving explanation quality through techniques such as retrieval-augmented generation, self-rationalization, and iterative refinement across multiple LLMs, often incorporating knowledge graphs or other external knowledge sources to improve accuracy and credibility. By making reasoning processes transparent and interpretable, this work is crucial for building trust in AI systems across diverse applications, from medical diagnosis and fact-checking to recommender systems and robotics.
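To make these ideas concrete, the sketch below illustrates how retrieval-augmented explanation generation and a single critique-and-refine pass might be combined. It is a minimal, illustrative example, not taken from any of the listed papers: the `llm` callable stands in for whatever text-generation model is used, the keyword-overlap retriever is a stand-in for a real retriever or knowledge-graph lookup, and the prompts are assumptions.

```python
# Minimal sketch of retrieval-augmented explanation generation with one
# critique-and-refine step. The `llm` argument is a placeholder for any
# prompt -> text callable; knowledge snippets and prompts are illustrative.
from typing import Callable, List


def retrieve(query: str, knowledge: List[str], k: int = 3) -> List[str]:
    """Rank knowledge snippets by naive keyword overlap with the query."""
    q_tokens = set(query.lower().split())
    scored = sorted(
        knowledge,
        key=lambda s: len(q_tokens & set(s.lower().split())),
        reverse=True,
    )
    return scored[:k]


def explain(decision: str, query: str, knowledge: List[str],
            llm: Callable[[str], str]) -> str:
    """Draft an explanation grounded in retrieved evidence, then refine it once."""
    evidence = retrieve(query, knowledge)
    draft = llm(
        "Decision: " + decision + "\n"
        "Evidence:\n- " + "\n- ".join(evidence) + "\n"
        "Write a short explanation of the decision that cites only the evidence above."
    )

    # Iterative refinement: a second pass critiques the draft against the
    # evidence, and a final pass revises it -- a simplified version of the
    # multi-LLM refinement loops described above.
    critique = llm(
        "List any claims in this explanation that are not supported by the evidence:\n"
        + draft + "\nEvidence:\n- " + "\n- ".join(evidence)
    )
    revised = llm(
        "Revise the explanation so every claim is supported by the evidence.\n"
        "Explanation:\n" + draft + "\nCritique:\n" + critique
    )
    return revised
```

In practice, the keyword retriever would typically be replaced by a dense retriever or a query over a knowledge graph, and the critique step could be delegated to a second, independent LLM to reduce self-confirmation.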
Papers
This collection lists 19 papers, dated from May 29, 2024 through December 24, 2024.