Interpretable Way
Interpretable AI focuses on developing machine learning models whose decision-making processes are transparent and understandable, addressing the "black box" problem of many deep learning systems. Current research emphasizes inherently interpretable models, such as those based on decision trees, rule-based systems, and neural network architectures designed for explainability (e.g., concept bottleneck models), as well as post-hoc explanation methods such as SHAP. This pursuit of interpretability is crucial for building trust in AI systems, particularly in high-stakes domains like healthcare and finance, and for facilitating better model debugging and validation.
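As a minimal sketch of the post-hoc side of this work, the example below applies the SHAP library's TreeExplainer to a scikit-learn random forest. The dataset, model, and sample size are illustrative assumptions, not a method drawn from any particular paper listed here.

```python
# Sketch: post-hoc explanation of a tree ensemble with SHAP (assumed setup).
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a simple classifier on a standard tabular dataset.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:50])

# shap_values attributes each prediction to individual input features
# (for classifiers, one set of attributions per output class); averaging
# absolute values across samples gives a global feature-importance ranking.
print(np.shape(shap_values))
```

The same pattern extends to model-agnostic explainers (e.g., shap.KernelExplainer) when the underlying model is not tree-based, at a higher computational cost.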