Higher NLE Quality
Improving the quality of natural language explanations (NLEs) generated by AI models is an active research area that aims to produce explanations which are both accurate and easy for humans to understand. Current efforts include developing unified frameworks for generating NLEs across diverse tasks, grounding explanations in external knowledge bases to improve consistency, and establishing rigorous evaluation metrics for explanation quality, in particular faithfulness to the model's actual reasoning process. This work is crucial for building more trustworthy and transparent AI systems and for fostering user understanding and acceptance of AI-driven decisions across applications.
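One common family of evaluation metrics in this literature is simulatability: an explanation is considered informative if a proxy model (or a human) can predict the original model's label more accurately when given the explanation alongside the input. Below is a minimal sketch of that idea, not any specific paper's metric; the `predict` callable stands in for a hypothetical proxy-model predictor, and the prompt format is an illustrative assumption.

```python
from typing import Callable, Sequence


def simulatability_gain(
    inputs: Sequence[str],
    explanations: Sequence[str],
    gold_labels: Sequence[str],
    predict: Callable[[str], str],  # hypothetical proxy-model predictor
) -> float:
    """Accuracy gain a proxy model obtains when the NLE is appended to
    the input, relative to seeing the input alone. Higher values suggest
    the explanation carries label-relevant information; a gain near zero
    suggests the explanation adds little beyond the input itself."""
    # Proxy accuracy from the input alone (baseline).
    base = sum(predict(x) == y for x, y in zip(inputs, gold_labels))
    # Proxy accuracy when the explanation is shown as well
    # (prompt format here is an assumption, not a fixed standard).
    with_nle = sum(
        predict(f"{x} Explanation: {e}") == y
        for x, e, y in zip(inputs, explanations, gold_labels)
    )
    return (with_nle - base) / len(inputs)
```

Note that raw simulatability can be gamed by explanations that simply leak the label (e.g., "the answer is positive"), which is why published variants typically adjust for such leakage rather than reporting the raw gain alone.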