Explanatory Model

Explanatory models aim to make the decision-making processes of complex AI systems, particularly predictive models, transparent and understandable to human users. Current research focuses on bridging the communication gap between AI systems and human experts, exploring techniques such as Shapley additive explanations and Bayesian modeling to provide interpretable insights into model predictions, especially in high-stakes domains like healthcare and engineering. The ultimate goal is to improve trust, enable critical evaluation of AI systems, and support the responsible integration of AI into applications by providing clear, useful explanations of a model's reasoning.
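Shapley additive explanations attribute a model's prediction to individual features by averaging each feature's marginal contribution over all subsets of the remaining features. A minimal sketch of the exact computation, using a toy additive model with hypothetical feature names (real tools such as the SHAP library use fast approximations instead of this brute-force enumeration):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values: each feature's weighted average marginal
    contribution across all subsets of the other features.
    Tractable only for a handful of features (2^n subsets)."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Classic Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = value_fn(set(subset) | {f})
                without_f = value_fn(set(subset))
                total += weight * (with_f - without_f)
        phi[f] = total
    return phi

# Toy additive "model": the prediction is the sum of fixed per-feature
# effects (feature names and values are illustrative assumptions).
effects = {"age": 2.0, "bmi": 1.0, "bp": -0.5}

def predict(active_features):
    return sum(effects[f] for f in active_features)

phi = shapley_values(list(effects), predict)
```

For a purely additive model like this one, each Shapley value equals the feature's own effect, and the values sum to the difference between the full prediction and the empty-coalition baseline, illustrating the efficiency property that makes the attribution interpretable.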

Papers