Model-Based Explanation
Model-based explanation aims to make the decisions of machine learning models understandable and trustworthy, addressing the "black box" problem that hinders wider adoption. Current research focuses on developing faster and more general explanation methods, including model-agnostic approaches that do not depend on a specific model architecture, and on evaluating the robustness and reliability of these explanations, particularly in high-stakes domains such as medicine. By providing human-interpretable insights into model behavior, this work is crucial for building confidence in AI systems and for ensuring their responsible deployment in applications ranging from healthcare to safety-critical systems.
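To make "model-agnostic" concrete, below is a minimal sketch of permutation feature importance, one of the simplest model-agnostic explanation techniques: it scores each feature by how much shuffling that feature's values degrades a fitted model's accuracy, without inspecting the model's internals. The choice of classifier, dataset, and metric here is purely illustrative and not drawn from any of the papers surveyed above.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Fit any model; the explanation below never looks inside it.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Model-agnostic importance: mean drop in accuracy when a feature is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = accuracy_score(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the feature's link to the target
            drops.append(baseline - accuracy_score(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)
    return importances

imp = permutation_importance(model, X_te, y_te)
for j in np.argsort(imp)[::-1][:5]:
    print(f"feature {j}: importance {imp[j]:.4f}")
```

Because the technique only needs a `predict` function and a held-out dataset, the same code works unchanged for any model family; scikit-learn ships a more featureful version as `sklearn.inspection.permutation_importance`.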