Inherent Interpretability
Inherent interpretability in machine learning focuses on designing models and methods that are transparent and understandable by construction, reducing the "black box" character of many AI systems. Current research emphasizes intrinsically interpretable model architectures, such as those based on decision trees, rule-based systems, and specific neural network designs (e.g., Kolmogorov-Arnold Networks), alongside feature attribution and visualization techniques that help explain model behavior. This work is crucial for building trust in AI, particularly in high-stakes domains such as healthcare and finance, where understanding model decisions is essential for responsible deployment and effective human-AI collaboration.
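To make interpretability by design concrete, the sketch below trains a shallow decision tree and prints its learned rules as readable if/else statements. It is a minimal illustration using scikit-learn's DecisionTreeClassifier and export_text; the dataset and depth limit are arbitrary choices for demonstration and are not drawn from any paper listed here.

```python
# Minimal sketch of an inherently interpretable model: a shallow decision
# tree whose learned rules can be printed and audited directly.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X, y = iris.data, iris.target

# Constraining depth keeps the rule set small enough for a human to inspect.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the fitted tree as human-readable decision rules,
# so the model's reasoning is transparent by construction rather than
# reconstructed post hoc.
print(export_text(clf, feature_names=list(iris.feature_names)))
```

In contrast to post-hoc explanation of an opaque model, the printed rules here are the model itself, which is the core idea behind inherently interpretable approaches.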
Papers
Understanding Video Transformers for Segmentation: A Survey of Application and Interpretability
Rezaul Karim, Richard P. Wildes
A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for Fairer Instruction-Tuned Machine Translation
Giuseppe Attanasio, Flor Miriam Plaza-del-Arco, Debora Nozza, Anne Lauscher
Interpretable Spectral Variational AutoEncoder (ISVAE) for time series clustering
Óscar Jiménez Rama, Fernando Moreno-Pino, David Ramírez, Pablo M. Olmos
A Uniform Language to Explain Decision Trees
Marcelo Arenas, Pablo Barcelo, Diego Bustamante, Jose Caraball, Bernardo Subercaseaux
Neural Harmonium: An Interpretable Deep Structure for Nonlinear Dynamic System Identification with Application to Audio Processing
Karim Helwani, Erfan Soltanmohammadi, Michael M. Goodwin
On the Interpretability of Part-Prototype Based Classifiers: A Human Centric Analysis
Omid Davoodi, Shayan Mohammadizadehsamakosh, Majid Komeili