Inherent Interpretability
Inherent interpretability in machine learning focuses on designing models and methods that are transparent and understandable by construction, rather than explained after the fact, reducing the "black box" character of many AI systems. Current research emphasizes intrinsically interpretable architectures, such as decision trees, rule-based systems, and specific neural network designs (e.g., Kolmogorov-Arnold Networks), alongside feature-attribution and visualization techniques that clarify model behavior. This pursuit is crucial for building trust in AI, particularly in high-stakes domains such as healthcare and finance, where understanding model decisions is paramount for responsible deployment and effective human-AI collaboration.
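As a minimal sketch of what "interpretable by construction" can mean in practice (not drawn from any of the papers below, and assuming scikit-learn with the Iris dataset purely for illustration), a shallow decision tree exposes its entire decision logic as readable rules rather than requiring a post-hoc explanation:

# Hypothetical illustration: a shallow decision tree whose learned rules are the model.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Keep the tree shallow so every decision path stays human-readable.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# export_text prints the full rule set, which is exactly the model's decision logic.
print(export_text(model, feature_names=load_iris().feature_names))

The printed if/then rules can be audited directly, which is the contrast the field draws with black-box models that must be interpreted through separate attribution or visualization tools.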
Papers
Comparing Bottom-Up and Top-Down Steering Approaches on In-Context Learning Tasks
Madeline Brumley, Joe Kwon, David Krueger, Dmitrii Krasheninnikov, Usman Anwar
Enhancing Phishing Detection through Feature Importance Analysis and Explainable AI: A Comparative Study of CatBoost, XGBoost, and EBM Models
Abdullah Fajar, Setiadi Yazid, Indra Budi
A Two-Step Concept-Based Approach for Enhanced Interpretability and Trust in Skin Lesion Diagnosis
Cristiano Patrício, Luís F. Teixeira, João C. Neves
Decoding Report Generators: A Cyclic Vision-Language Adapter for Counterfactual Explanations
Yingying Fang, Zihao Jin, Shaojie Guo, Jinda Liu, Yijian Gao, Junzhi Ning, Zhiling Yue, Zhi Li, Simon LF Walsh, Guang Yang
Interplay between Federated Learning and Explainable Artificial Intelligence: a Scoping Review
Luis M. Lopez-Ramos, Florian Leiser, Aditya Rastogi, Steven Hicks, Inga Strümke, Vince I. Madai, Tobias Budig, Ali Sunyaev, Adam Hilbert
Explainable AI through a Democratic Lens: DhondtXAI for Proportional Feature Importance Using the D'Hondt Method
Turker Berk Donmez
Towards Unifying Interpretability and Control: Evaluation via Intervention
Usha Bhalla, Suraj Srinivas, Asma Ghandeharioun, Himabindu Lakkaraju
Local vs distributed representations: What is the right basis for interpretability?
Julien Colin, Lore Goetschalckx, Thomas Fel, Victor Boutin, Jay Gopal, Thomas Serre, Nuria Oliver
Human-in-the-Loop Feature Selection Using Interpretable Kolmogorov-Arnold Network-based Double Deep Q-Network
Md Abrar Jahin, M. F. Mridha, Nilanjan Dey
Deep Trees for (Un)structured Data: Tractability, Performance, and Interpretability
Dimitris Bertsimas, Lisa Everest, Jiayi Gu, Matthew Peroni, Vasiliki Stoumpou
Towards Multi-dimensional Explanation Alignment for Medical Classification
Lijie Hu, Songning Lai, Wenshuo Chen, Hongru Xiao, Hongbin Lin, Lu Yu, Jingfeng Zhang, Di Wang