Model Explanation
Model explanation, or explainable AI (XAI), aims to make the decision-making processes of complex machine learning models transparent and understandable. Current research focuses on developing and evaluating explanation methods, including those based on feature importance (e.g., SHAP, LIME), prototypes, and neural pathways, often applied to deep learning models (e.g., CNNs, Vision Transformers) and large language models (LLMs). This field is crucial for building trust in AI systems, improving model development and debugging, and understanding and mitigating the privacy risks that model transparency can itself introduce.
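To make the feature-importance family of methods concrete, below is a minimal sketch of computing SHAP attributions for a tree-based model. The dataset, model, and sample sizes are illustrative assumptions and are not drawn from the papers listed here; any fitted tree ensemble could stand in for the random forest.

```python
# Illustrative sketch (assumed setup): global feature importance via SHAP on a tree model.
import numpy as np
import shap  # pip install shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Hypothetical model and data, chosen only to keep the example self-contained.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])  # shape: (200, n_features)

# Aggregate per-sample attributions into a global importance ranking.
global_importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, global_importance), key=lambda t: -t[1]):
    print(f"{name:10s} {score:.3f}")
```

The same attributions can also be inspected per prediction (local explanations), which is how such methods are typically used for debugging individual model decisions.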
Papers
Explainability of Deep Learning-Based Plant Disease Classifiers Through Automated Concept Identification
Jihen Amara, Birgitta König-Ries, Sheeba Samuel
Label up: Learning Pulmonary Embolism Segmentation from Image Level Annotation through Model Explainability
Florin Condrea, Saikiran Rapaka, Marius Leordeanu
FaceX: Understanding Face Attribute Classifiers through Summary Model Explanations
Ioannis Sarridis, Christos Koutlis, Symeon Papadopoulos, Christos Diou
Interplay between Federated Learning and Explainable Artificial Intelligence: a Scoping Review
Luis M. Lopez-Ramos, Florian Leiser, Aditya Rastogi, Steven Hicks, Inga Strümke, Vince I. Madai, Tobias Budig, Ali Sunyaev, Adam Hilbert
Neuro-Symbolic AI: Explainability, Challenges, and Future Trends
Xin Zhang, Victor S. Sheng