Interpretable Way
Interpretable AI focuses on developing machine learning models whose decision-making processes are transparent and understandable, addressing the "black box" problem of many deep learning systems. Current research emphasizes both inherently interpretable models, such as decision trees, rule-based systems, and neural network architectures designed for explainability (e.g., concept bottleneck models), and post-hoc explanation methods such as SHAP (SHapley Additive exPlanations). Interpretability is crucial for building trust in AI systems, particularly in high-stakes domains like healthcare and finance, and for facilitating model debugging and validation.
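As a rough illustration of the two strands above, the sketch below trains a shallow decision tree (inherently interpretable: its rules can be printed and read directly) and then applies SHAP as a post-hoc explainer of its predictions. The dataset, hyperparameters, and the use of scikit-learn and the shap package are illustrative assumptions, not drawn from any of the papers listed here.

# Minimal sketch: inherently interpretable model + post-hoc SHAP explanation.
# Dataset and hyperparameters are arbitrary choices for demonstration only.
import shap
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, export_text

data = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

# Inherently interpretable model: a depth-limited tree whose decision rules
# are human-readable.
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(data.feature_names)))

# Post-hoc explanation: SHAP attributes each prediction to individual features.
explainer = shap.TreeExplainer(tree)
shap_values = explainer.shap_values(X_test)           # shape: (n_samples, n_features)
print(dict(zip(data.feature_names, shap_values[0])))  # attributions for the first test sample

The same post-hoc step applies to less transparent models (e.g., gradient-boosted trees or neural networks via other SHAP explainers), which is why such methods are often used when an inherently interpretable model is not an option.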
Papers
SSTKG: Simple Spatio-Temporal Knowledge Graph for Intepretable and Versatile Dynamic Information Embedding
Ruiyi Yang, Flora D. Salim, Hao Xue
Interpretable Brain-Inspired Representations Improve RL Performance on Visual Navigation Tasks
Moritz Lange, Raphael C. Engelhardt, Wolfgang Konen, Laurenz Wiskott
Interpretable Embedding for Ad-hoc Video Search
Jiaxin Wu, Chong-Wah Ngo