Interpretable AI
Interpretable AI (IAI) aims to create artificial intelligence systems whose decision-making processes are transparent and understandable, addressing concerns about "black box" models. Current research focuses on quantifying and improving the consistency of explanations, applying these techniques across model architectures including deep neural networks, and adapting game-theoretic approaches such as Shapley values for improved interpretability. This work is crucial for building trust in AI systems in fields such as healthcare, finance, and law, where understanding the reasoning behind a decision is a prerequisite for responsible deployment and effective human-AI collaboration.
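To make the Shapley-value idea concrete: each feature is credited with its average marginal contribution to the model's output over all coalitions of the other features. Below is a minimal sketch of the exact computation in Python; `value_fn` and `toy_value` are hypothetical placeholders standing in for a model evaluated on a feature subset, and the enumeration is exponential in the number of features, so practical libraries such as SHAP rely on approximations instead.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, features):
    """Exact Shapley values for a small feature set.

    value_fn: maps a frozenset of features to a payoff, e.g. the model's
              prediction when only those features are "present".
    features: list of feature names (exponential cost, keep it small).
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        # Average f's marginal contribution over every coalition S not
        # containing f, weighted by |S|! * (n - |S| - 1)! / n!.
        for k in range(n):
            for subset in combinations(others, k):
                S = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value_fn(S | {f}) - value_fn(S))
        phi[f] = total
    return phi

# Toy payoff (hypothetical): additive effects plus one x1-x2 interaction.
def toy_value(S):
    v = 0.0
    if "x1" in S: v += 1.0
    if "x2" in S: v += 2.0
    if "x1" in S and "x2" in S: v += 0.5  # interaction, split by symmetry
    return v

print(shapley_values(toy_value, ["x1", "x2", "x3"]))
# {'x1': 1.25, 'x2': 2.25, 'x3': 0.0}
```

Note the efficiency property in the toy output: the attributions sum to the payoff of the full feature set minus that of the empty set (3.5 here), which is one reason Shapley-based attributions are attractive for interpretability.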