Inherent Interpretability
Inherent interpretability in machine learning focuses on designing models and methods that are transparent and understandable by construction, reducing the "black box" character of many AI systems. Current research emphasizes intrinsically interpretable architectures, such as decision trees, rule-based systems, and specialized neural network designs (e.g., Kolmogorov-Arnold Networks), alongside feature attribution and visualization techniques that clarify model behavior. This work is crucial for building trust in AI, particularly in high-stakes domains such as healthcare and finance, where understanding model decisions is essential for responsible deployment and effective human-AI collaboration.
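As a concrete illustration of the idea, the minimal sketch below fits a shallow decision tree with scikit-learn and prints its full decision logic; the library, dataset, and depth limit are illustrative assumptions, not drawn from any of the papers listed here. The point is that the printed rules are the model itself rather than a post-hoc approximation of it.

```python
# Minimal sketch of an inherently interpretable model: a shallow decision
# tree whose decision rules can be read directly. Dataset and max_depth
# are illustrative choices for this example only.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Depth is deliberately limited so the whole model remains human-readable.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

print(f"Test accuracy: {model.score(X_test, y_test):.3f}")
# export_text prints the learned decision logic as nested if/else rules,
# so the explanation and the model coincide.
print(export_text(model, feature_names=list(X.columns)))
```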
Papers
Attention Meets Post-hoc Interpretability: A Mathematical Perspective
Gianluigi Lopardo, Frederic Precioso, Damien Garreau
InterpretCC: Intrinsic User-Centric Interpretability through Global Mixture of Experts
Vinitra Swamy, Syrielle Montariol, Julian Blackwell, Jibril Frej, Martin Jaggi, Tanja Käser
Focal Modulation Networks for Interpretable Sound Classification
Luca Della Libera, Cem Subakan, Mirco Ravanelli
Detecting mental disorder on social media: a ChatGPT-augmented explainable approach
Loris Belcastro, Riccardo Cantini, Fabrizio Marozzo, Domenico Talia, Paolo Trunfio
Rethinking Interpretability in the Era of Large Language Models
Chandan Singh, Jeevana Priya Inala, Michel Galley, Rich Caruana, Jianfeng Gao
NormEnsembleXAI: Unveiling the Strengths and Weaknesses of XAI Ensemble Techniques
Weronika Hryniewska-Guzik, Bartosz Sawicki, Przemysław Biecek
ViTree: Single-path Neural Tree for Step-wise Interpretable Fine-grained Visual Categorization
Danning Lao, Qi Liu, Jiazi Bu, Junchi Yan, Wei Shen
Widely Linear Matched Filter: A Lynchpin towards the Interpretability of Complex-valued CNNs
Qingchen Wang, Zhe Li, Zdenka Babic, Wei Deng, Ljubiša Stanković, Danilo P. Mandic
How well can large language models explain business processes?
Dirk Fahland, Fabiana Fournier, Lior Limonad, Inna Skarbovsky, Ava J. E. Swevels
A Reply to Makelov et al. (2023)'s "Interpretability Illusion" Arguments
Zhengxuan Wu, Atticus Geiger, Jing Huang, Aryaman Arora, Thomas Icard, Christopher Potts, Noah D. Goodman