Explanatory Model
Explanatory models aim to make the decision-making processes of complex AI systems, especially predictive models, more transparent and understandable to human users. Current research focuses on bridging the communication gap between AI systems and human experts, exploring techniques such as Shapley additive explanations (SHAP) and Bayesian modeling to provide interpretable insights into model predictions, particularly in high-stakes domains like healthcare and engineering. The ultimate goal is to improve trust, facilitate critical evaluation of AI systems, and support the responsible integration of AI into applications by providing clear and useful explanations of a model's reasoning.
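To make the Shapley-based approach concrete, the sketch below computes exact Shapley feature attributions for a toy model by enumerating all coalitions (the `model` function and its inputs are hypothetical; real SHAP libraries approximate this sum for efficiency):

```python
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical stand-in for any black-box predictor (here: linear).
    return 3.0 * x[0] + 2.0 * x[1] + 1.0

def shapley_values(model, x, baseline):
    """Exact Shapley attributions: average marginal contribution of each
    feature over all coalitions, with missing features set to baseline."""
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [j for j in features if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Shapley weight for a coalition of this size.
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in features]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in features]
                phi[i] += w * (model(with_i) - model(without_i))
    return phi

phi = shapley_values(model, x=[2.0, 5.0], baseline=[0.0, 0.0])
# For a linear model each attribution reduces to coef * (x - baseline),
# so phi == [6.0, 10.0] and baseline prediction + sum(phi) == model(x).
```

The attributions always sum to the difference between the prediction and the baseline prediction, which is what makes Shapley explanations "additive" and easy for a human expert to audit.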