Explanation Generation
Explanation generation focuses on creating human-understandable justifications for the decisions made by complex AI systems, particularly large language models (LLMs) and graph neural networks (GNNs). Current research emphasizes improving explanation quality through techniques like retrieval-augmented generation, self-rationalization, and iterative refinement across multiple LLMs, often incorporating knowledge graphs or other external knowledge sources to improve accuracy and credibility. By making reasoning processes transparent and interpretable, this work is crucial for building trust in AI systems across diverse applications, from medical diagnosis and fact-checking to recommender systems and robotics.
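To make two of the techniques above concrete, here is a minimal sketch of retrieval-augmented explanation generation combined with an iterative refinement loop. It is not drawn from any of the papers below: the function names (retrieve, explain, refine), the keyword-overlap retriever, and the stub echo_llm / echo_critic callables are all hypothetical stand-ins chosen so the sketch runs offline; a real system would substitute actual LLM calls and a dense-vector or knowledge-graph retriever.

```python
from typing import Callable, List

def retrieve(query: str, corpus: List[str], k: int = 2) -> List[str]:
    # Toy keyword-overlap retriever; real systems use dense embeddings
    # or a knowledge-graph lookup.
    terms = set(query.lower().split())
    return sorted(corpus,
                  key=lambda doc: len(terms & set(doc.lower().split())),
                  reverse=True)[:k]

def explain(decision: str, query: str, corpus: List[str],
            llm: Callable[[str], str]) -> str:
    # Retrieval-augmented generation: ground the justification in
    # retrieved evidence so the explanation cites external knowledge.
    evidence = retrieve(query, corpus)
    prompt = ("Decision: " + decision + "\nEvidence:\n"
              + "\n".join("- " + e for e in evidence)
              + "\nJustify the decision, citing only the evidence above.")
    return llm(prompt)

def refine(explanation: str, llm: Callable[[str], str],
           critic: Callable[[str], str], rounds: int = 2) -> str:
    # Iterative refinement with multiple LLMs: a critic model reviews
    # the draft, and the generator revises it using the feedback.
    for _ in range(rounds):
        feedback = critic("Critique this explanation:\n" + explanation)
        explanation = llm("Explanation:\n" + explanation
                          + "\nFeedback:\n" + feedback
                          + "\nRevise accordingly.")
    return explanation

if __name__ == "__main__":
    # Stub models so the sketch runs without network access;
    # swap in real LLM clients for both roles in practice.
    echo_llm = lambda p: "[revised explanation grounded in: " + p[:60] + "...]"
    echo_critic = lambda p: "[feedback: cite the evidence explicitly]"
    corpus = [
        "Aspirin inhibits platelet aggregation.",
        "Patients with peptic ulcers should avoid NSAIDs.",
        "Regular exercise lowers blood pressure.",
    ]
    draft = explain("Avoid prescribing aspirin.", "aspirin ulcer risk",
                    corpus, echo_llm)
    print(refine(draft, echo_llm, echo_critic))
```

In practice the critic and generator are often distinct models (or the same model prompted for self-rationalization), and the quality of the retrieved evidence largely determines how credible the final explanation is.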
Papers
Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models
Xiaolei Wang, Xinyu Tang, Wayne Xin Zhao, Jingyuan Wang, Ji-Rong Wen
Beyond Labels: Empowering Human Annotators with Natural Language Explanations through a Novel Active-Learning Architecture
Bingsheng Yao, Ishan Jindal, Lucian Popa, Yannis Katsis, Sayan Ghosh, Lihong He, Yuxuan Lu, Shashank Srivastava, Yunyao Li, James Hendler, Dakuo Wang