Explanation Model
Explanation models aim to make the decision-making processes of complex machine learning models transparent and understandable. Current research focuses on generating explanations that are both faithful to the underlying model and interpretable to humans, often employing techniques such as perturbation analysis, attention mechanisms within transformer-based models, and rule-based surrogate models. These efforts are crucial for building trust in AI systems, improving model debugging, and facilitating human-AI collaboration across diverse applications, from autonomous driving to medical diagnosis.
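Perturbation analysis, one of the techniques mentioned above, can be illustrated with a minimal sketch: each input feature is replaced with a baseline value in turn, and the resulting shift in the model's prediction is taken as that feature's importance. The `black_box_model` below is a hypothetical stand-in for any trained model, not from any specific paper.

```python
def black_box_model(features):
    # Toy stand-in for a trained model: a simple weighted sum.
    # In practice this would be any opaque predictor (neural net, ensemble, ...).
    weights = [0.8, 0.1, 0.05, 0.05]
    return sum(w * x for w, x in zip(weights, features))

def perturbation_importance(model, x, baseline=0.0):
    """Score each feature by how much the prediction changes
    when that feature alone is replaced with a baseline value."""
    base_pred = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline  # knock out one feature
        scores.append(abs(base_pred - model(perturbed)))
    return scores

x = [1.0, 2.0, 3.0, 4.0]
print(perturbation_importance(black_box_model, x))
```

With these toy weights, the first feature dominates the score, matching the intuition that perturbing an influential feature moves the prediction most. Real perturbation-based methods differ mainly in how they choose baselines (zeros, means, samples) and how they aggregate perturbations over feature subsets.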