Explanation Models
Explanation models aim to make the decision-making processes of complex machine learning models more transparent and understandable. Current research focuses on generating explanations that are both faithful to the underlying model and interpretable to humans, typically through techniques such as perturbation analysis, attention mechanisms in transformer-based models, and rule-based surrogate models. These efforts are crucial for building trust in AI systems, improving model debugging, and facilitating human-AI collaboration across diverse applications, from autonomous driving to medical diagnosis.
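To make one of these techniques concrete, the sketch below shows a minimal form of perturbation analysis: each input feature is occluded in turn (replaced by a baseline value), and the resulting change in the model's prediction is taken as that feature's importance. This is an illustrative example only, not code from any of the surveyed papers; the `model_predict` callable, the `baseline` value, and the toy linear model in the usage snippet are all assumptions made for the sketch.

```python
import numpy as np

def perturbation_importance(model_predict, x, baseline=0.0):
    """Score each feature of a single input x by how much replacing
    it with a baseline value changes the model's prediction."""
    base_pred = model_predict(x.reshape(1, -1))[0]
    scores = np.zeros(x.shape[0])
    for i in range(x.shape[0]):
        x_pert = x.copy()
        x_pert[i] = baseline  # occlude one feature at a time
        pert_pred = model_predict(x_pert.reshape(1, -1))[0]
        # Larger prediction shift => feature mattered more
        scores[i] = abs(base_pred - pert_pred)
    return scores

# Toy usage: a linear "model" whose true weights we can check against.
weights = np.array([2.0, -1.0, 0.0])
model_predict = lambda X: X @ weights
x = np.array([1.0, 1.0, 1.0])
print(perturbation_importance(model_predict, x))  # ~[2.0, 1.0, 0.0]
```

Because it only queries the model's predictions, this style of explanation is model-agnostic, which is one reason perturbation-based methods are a common baseline in the literature; more sophisticated variants perturb groups of features or sample perturbations from a learned distribution.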