Inherent Interpretability
Inherent interpretability in machine learning focuses on designing models and methods that are transparent and understandable by construction, reducing the "black box" nature of many AI systems. Current research emphasizes intrinsically interpretable architectures, such as those based on decision trees, rule-based systems, and specific neural network designs (e.g., Kolmogorov-Arnold Networks), alongside feature attribution and visualization techniques that deepen understanding of model behavior. This pursuit is crucial for building trust in AI, particularly in high-stakes applications like healthcare and finance, where understanding model decisions is paramount for responsible deployment and effective human-AI collaboration.
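To make the contrast with post-hoc explanation concrete, the sketch below trains a depth-limited decision tree, one of the intrinsically interpretable model families mentioned above, and prints its complete decision logic as rules. It is a minimal illustration using scikit-learn and the Iris dataset, chosen here for brevity and not drawn from any of the listed papers.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Small tabular dataset standing in for any tabular prediction task.
data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# A shallow decision tree is interpretable by construction: its entire
# decision process is a handful of readable if/then splits.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print(f"Test accuracy: {tree.score(X_test, y_test):.2f}")

# export_text renders the whole model as human-readable rules, so the
# explanation *is* the model rather than a post-hoc approximation of it.
print(export_text(tree, feature_names=list(data.feature_names)))
```

The printed rule list is a faithful, global description of the model's behavior, which is the property that inherently interpretable designs aim to preserve even as architectures grow more expressive.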
Papers
Benchmarking and Enhancing Disentanglement in Concept-Residual Models
Renos Zabounidis, Ini Oguntola, Konghao Zhao, Joseph Campbell, Simon Stepputtis, Katia Sycara
CLIP-QDA: An Explainable Concept Bottleneck Model
Rémi Kazmierczak, Eloïse Berthier, Goran Frehse, Gianni Franchi
A data-science pipeline to enable the Interpretability of Many-Objective Feature Selection
Uchechukwu F. Njoku, Alberto Abelló, Besim Bilalli, Gianluca Bontempi
Is This the Subspace You Are Looking for? An Interpretability Illusion for Subspace Activation Patching
Aleksandar Makelov, Georg Lange, Neel Nanda
XAI for time-series classification leveraging image highlight methods
Georgios Makridis, Georgios Fatouros, Vasileios Koukos, Dimitrios Kotios, Dimosthenis Kyriazis, Ioannis Soldatos