Consensus-Based Explanation
Consensus-based explanation aims to improve the trustworthiness and understandability of machine learning models by producing explanations that are consistent across different explanation methods. Current research focuses on quantifying and mitigating the disagreement between popular attribution techniques such as SHAP and LIME, and in some cases incorporates explanation consensus directly into model training. This work matters for building reliable AI systems, particularly in high-stakes applications: explanations that are not only accurate but also consistent across methods foster user confidence and support effective human-AI collaboration.
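As a rough illustration (not taken from any specific paper in this collection), the sketch below computes two simple disagreement measures between attribution vectors that SHAP and LIME might produce for the same instance: top-k feature agreement and Spearman rank correlation. The attribution values shown are placeholders; in practice they would be extracted from the respective explainers for the same instance and feature ordering.

```python
import numpy as np
from scipy.stats import spearmanr

def top_k_agreement(attr_a, attr_b, k=5):
    """Fraction of overlap between the top-k most important features
    (by absolute attribution) of two explanations."""
    top_a = set(np.argsort(-np.abs(attr_a))[:k])
    top_b = set(np.argsort(-np.abs(attr_b))[:k])
    return len(top_a & top_b) / k

def rank_correlation(attr_a, attr_b):
    """Spearman rank correlation between the importance rankings
    implied by two attribution vectors."""
    rho, _ = spearmanr(np.abs(attr_a), np.abs(attr_b))
    return rho

# Placeholder attributions for one instance; in practice these could come
# from, e.g., shap.Explainer(model)(x).values and the feature weights of a
# LIME explanation for the same instance (hypothetical values shown here).
shap_attr = np.array([0.42, -0.10, 0.03, 0.31, -0.22, 0.01])
lime_attr = np.array([0.38, -0.05, 0.12, 0.25, -0.30, 0.02])

print(f"top-3 agreement:  {top_k_agreement(shap_attr, lime_attr, k=3):.2f}")
print(f"rank correlation: {rank_correlation(shap_attr, lime_attr):.2f}")
```

High agreement on such metrics is one way to report explanation consensus; a consensus-aware training objective could, in principle, penalize instances where these scores are low.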