Inherent Interpretability
Inherent interpretability in machine learning focuses on designing models and methods that are transparent and understandable by construction, reducing the "black box" nature of many AI systems. Current research emphasizes intrinsically interpretable model architectures, such as those based on decision trees, rule-based systems, and specific neural network designs (e.g., Kolmogorov-Arnold Networks), alongside techniques like feature attribution and visualization that further clarify model behavior. This pursuit is crucial for building trust in AI, particularly in high-stakes applications like healthcare and finance, where understanding model decisions is paramount for responsible deployment and effective human-AI collaboration.
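To make the contrast with post-hoc explanation concrete, the minimal sketch below trains a shallow decision tree, one of the intrinsically interpretable architectures mentioned above, and prints its complete rule set. It assumes scikit-learn is available; the dataset and depth limit are illustrative choices, not details drawn from any of the papers listed here.

```python
# Minimal sketch: an inherently interpretable model whose decision logic
# can be read directly, rather than explained after the fact.
# Assumes scikit-learn; dataset and hyperparameters are illustrative only.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()

# A shallow tree: constraining depth keeps the rule set small enough
# for a human to audit end to end.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(iris.data, iris.target)

# The fitted model *is* the explanation: every prediction follows a
# human-readable path of threshold comparisons on named features.
print(export_text(model, feature_names=list(iris.feature_names)))
```

The design point is that interpretability here comes from the model class itself (a bounded set of readable rules), not from an auxiliary explanation method applied to an opaque predictor.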
Papers
Hint-AD: Holistically Aligned Interpretability in End-to-End Autonomous Driving
Kairui Ding, Boyuan Chen, Yuchen Su, Huan-ang Gao, Bu Jin, Chonghao Sima, Wuqiang Zhang, Xiaohui Li, Paul Barsch, Hongyang Li, Hao Zhao
Quantifying and Enabling the Interpretability of CLIP-like Models
Avinash Madasu, Yossi Gandelsman, Vasudev Lal, Phillip Howard
A Quantitative Approach for Evaluating Disease Focus and Interpretability of Deep Learning Models for Alzheimer's Disease Classification
Thomas Yu Chow Tam, Litian Liang, Ke Chen, Haohan Wang, Wei Wu
Top-GAP: Integrating Size Priors in CNNs for more Interpretability, Robustness, and Bias Mitigation
Lars Nieradzik, Henrike Stephani, Janis Keuper
Report Cards: Qualitative Evaluation of Language Models Using Natural Language Summaries
Blair Yang, Fuyang Cui, Keiran Paster, Jimmy Ba, Pashootan Vaezipoor, Silviu Pitis, Michael R. Zhang
Enabling Trustworthy Federated Learning in Industrial IoT: Bridging the Gap Between Interpretability and Robustness
Senthil Kumar Jagatheesaperumal, Mohamed Rahouti, Ali Alfatemi, Nasir Ghani, Vu Khanh Quy, Abdellah Chehri
Interpretable Clustering: A Survey
Lianyu Hu, Mudi Jiang, Junjie Dong, Xinying Liu, Zengyou He