Explanation Generation
Explanation generation focuses on creating human-understandable justifications for the decisions of complex AI systems, particularly large language models (LLMs) and graph neural networks (GNNs). Current research emphasizes improving explanation quality through techniques such as retrieval-augmented generation, self-rationalization, and iterative refinement with multiple LLMs, often drawing on knowledge graphs or other external knowledge sources to improve accuracy and credibility. By making reasoning processes transparent and interpretable, this work is key to building trust in AI systems across applications ranging from medical diagnosis and fact-checking to recommender systems and robotics.
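As a rough illustration of the retrieval-augmented pattern mentioned above, the sketch below retrieves supporting facts from a small in-memory knowledge store and packs them into a prompt asking a model to justify a prediction. The knowledge snippets, the retrieve_facts helper, and the llm_generate stub are hypothetical placeholders for this sketch, not taken from any of the listed papers.

```python
"""Minimal sketch of retrieval-augmented explanation generation (assumed setup)."""

# Hypothetical external knowledge snippets standing in for a knowledge graph
# or document store.
KNOWLEDGE_BASE = [
    "Aspirin inhibits platelet aggregation and is used to prevent blood clots.",
    "Ibuprofen is a nonsteroidal anti-inflammatory drug that reduces fever.",
    "Warfarin is an anticoagulant whose effect is monitored via INR testing.",
]


def retrieve_facts(query: str, k: int = 2) -> list[str]:
    """Rank knowledge snippets by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda fact: len(query_terms & set(fact.lower().split())),
        reverse=True,
    )
    return scored[:k]


def llm_generate(prompt: str) -> str:
    """Placeholder for a real LLM call (any chat-completion API could go here)."""
    return f"[model explanation conditioned on a prompt of {len(prompt)} characters]"


def explain(prediction: str, question: str) -> str:
    """Build an evidence-grounded prompt and ask the model to justify the prediction."""
    facts = retrieve_facts(question)
    evidence = "\n".join(f"- {fact}" for fact in facts)
    prompt = (
        f"Question: {question}\n"
        f"Predicted answer: {prediction}\n"
        f"Supporting facts:\n{evidence}\n"
        "Explain, citing the facts above, why the predicted answer is reasonable."
    )
    return llm_generate(prompt)


if __name__ == "__main__":
    print(explain("aspirin", "Which drug helps prevent blood clots?"))
```

In practice the keyword-overlap retriever would be replaced by a dense or graph-based retriever, and the single generation step could be wrapped in an iterative loop in which one model critiques and another refines the explanation.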
Papers
Explaining Preference-driven Schedules: the EXPRES Framework
Alberto Pozanco, Francesca Mosca, Parisa Zehtabi, Daniele Magazzeni, Sarit Kraus
E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning
Jiangjie Chen, Rui Xu, Ziquan Fu, Wei Shi, Zhongqiao Li, Xinbo Zhang, Changzhi Sun, Lei Li, Yanghua Xiao, Hao Zhou