Explanation Uncertainty
Research on explanation uncertainty in machine learning aims to quantify how reliable model explanations are, with the goal of improving trust and interpretability. Current work estimates this uncertainty for explanation techniques such as gradient-based attributions, SHAP values, and prototype-based networks, typically using bootstrapping or Bayesian methods. This line of work is important for building trustworthy AI systems across applications ranging from medical image analysis to autonomous driving, because it exposes model limitations and improves the reliability of predictions.
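As a concrete illustration of the bootstrapping idea mentioned above, the sketch below refits a simple model on resampled training data, computes a gradient-style attribution for the same query point under each refit, and reports the spread of the attributions as an uncertainty estimate. It is a minimal, hypothetical example, not the method of any particular paper: the dataset, the `gradient_attribution` helper, the choice of logistic regression, and the number of resamples are all illustrative assumptions.

```python
# Minimal sketch: explanation uncertainty via bootstrapped attributions.
# Assumptions (not from any specific paper): a synthetic dataset, a
# logistic-regression model, and a simple "gradient x input" attribution.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def gradient_attribution(model, x):
    """Gradient-x-input attribution for a linear model: coefficients
    scaled by the input features (illustrative helper, not a library API)."""
    return model.coef_.ravel() * x

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
x_query = X[0]                      # the instance whose explanation we study
rng = np.random.default_rng(0)

attributions = []
for _ in range(50):                 # 50 bootstrap resamples of the training set
    idx = rng.integers(0, len(X), size=len(X))
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    attributions.append(gradient_attribution(m, x_query))

attributions = np.stack(attributions)
mean_attr = attributions.mean(axis=0)   # point explanation
std_attr = attributions.std(axis=0)     # explanation uncertainty per feature
for j, (mu, sd) in enumerate(zip(mean_attr, std_attr)):
    print(f"feature {j}: attribution {mu:+.3f} ± {sd:.3f}")
```

A Bayesian variant would replace the bootstrap loop with draws from a posterior over model parameters (or with Monte Carlo dropout in a neural network), but the output is the same in spirit: a distribution over attributions rather than a single explanation.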
Papers
August 7, 2024
March 25, 2024
March 20, 2024
January 30, 2024
December 12, 2023
November 10, 2023
July 4, 2023
January 13, 2023
October 5, 2022
August 5, 2022
January 27, 2022