Faithful Explanation
Faithful explanation in explainable AI (XAI) concerns methods whose explanations accurately reflect a model's actual decision-making process, rather than merely sounding plausible, with the goal of enhancing trust and understanding. Current research emphasizes robust evaluation frameworks for assessing explanation fidelity, exploring techniques such as counterfactual generation, rule extraction, and attention mechanisms across model architectures including graph neural networks and large language models. The pursuit of faithful explanations is crucial for building trustworthy AI systems across diverse domains, particularly in high-stakes applications like healthcare and finance, where reliable interpretations of model predictions are paramount.
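One common fidelity check mentioned in this line of work is an ablation-style test: remove the features an explanation ranks as most important and measure how much the model's output changes; a faithful explanation should pinpoint the features that actually drive the prediction. The sketch below illustrates the idea on a toy linear model with a gradient-times-input attribution. All function names, weights, and inputs are illustrative assumptions, not from any specific paper.

```python
# Hedged sketch: an ablation-based fidelity score ("comprehensiveness")
# for a toy linear model. Zero out the k features the explanation ranks
# highest and measure the drop in the model's score; a larger drop
# suggests the explanation faithfully identified influential features.
# All names and values here are illustrative.

def predict(weights, x):
    """Toy linear model: score = w . x."""
    return sum(w * xi for w, xi in zip(weights, x))

def attribute(weights, x):
    """Per-feature attribution (weight * input, i.e. a simple
    gradient-times-input explanation, exact for a linear model)."""
    return [w * xi for w, xi in zip(weights, x)]

def comprehensiveness(weights, x, k):
    """Score drop after zeroing the k features the explanation
    ranks as most important (by absolute attribution)."""
    scores = attribute(weights, x)
    top_k = sorted(range(len(x)), key=lambda i: abs(scores[i]),
                   reverse=True)[:k]
    x_masked = [0.0 if i in top_k else xi for i, xi in enumerate(x)]
    return predict(weights, x) - predict(weights, x_masked)

weights = [2.0, 1.0, 0.5, 0.1]
x = [1.0, 2.0, 3.0, 4.0]
print(comprehensiveness(weights, x, k=2))  # → 4.0
```

For a linear model the attribution is exact, so the drop equals the masked features' full contribution; for deep models the same protocol is applied with learned attributions, and low drops flag unfaithful explanations.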