Interpretable AI
Interpretable AI focuses on developing machine learning models whose decision-making processes are transparent and understandable, addressing the "black box" problem of many deep learning systems. Current research emphasizes creating inherently interpretable models, such as those based on decision trees, rule-based systems, and specific neural network architectures designed for explainability (e.g., concept bottleneck models), as well as developing post-hoc explanation methods like SHAP values. This pursuit of interpretability is crucial for building trust in AI systems, particularly in high-stakes domains like healthcare and finance, and for facilitating better model debugging and validation.
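To make the post-hoc side concrete, here is a minimal, pure-Python sketch of exact Shapley-value attribution, the quantity that SHAP methods approximate efficiently. The function name `shapley_values` and the toy linear model are illustrative assumptions, not drawn from any particular paper; real SHAP libraries avoid this exponential enumeration.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for model f at input x against a baseline.

    Feature i's attribution averages f's marginal gain from revealing x[i]
    over all subsets of the other features. Cost is exponential in the
    number of features, so this is for toy illustration only.
    """
    n = len(x)

    def blend(subset):
        # Features in `subset` take their value from x; the rest from baseline.
        return [x[j] if j in subset else baseline[j] for j in range(n)]

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (f(blend(set(S) | {i})) - f(blend(set(S))))
        phi.append(total)
    return phi

# For a linear model, Shapley values reduce to w_i * (x_i - baseline_i).
f = lambda v: 2.0 * v[0] + 3.0 * v[1] - 1.0 * v[2]
phi = shapley_values(f, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # → [2.0, 6.0, -3.0]
```

A useful sanity check is the completeness property: the attributions sum to `f(x) - f(baseline)`, which is what makes Shapley-based explanations well suited to debugging and validating model predictions.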