Model Explanation
Model explanation, or explainable AI (XAI), aims to make the decision-making processes of complex machine learning models transparent and understandable. Current research focuses on developing and evaluating explanation methods, including those based on feature importance (e.g., SHAP, LIME), prototypes, and neural pathways, often applied to deep learning models (e.g., CNNs, Vision Transformers) and large language models (LLMs). This work is crucial for building trust in AI systems, improving model development and debugging, and understanding and mitigating the privacy risks that model transparency can introduce.
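As a concrete illustration of perturbation-based feature-importance explanation, the following is a minimal, from-scratch LIME-style sketch: it perturbs a single instance, queries a black-box classifier, and fits a locally weighted linear surrogate whose coefficients serve as per-feature importances. The helper `lime_style_explanation`, the scikit-learn model, and the kernel-width choice are illustrative assumptions, not the official LIME or SHAP APIs.

```python
# Minimal LIME-style local explanation sketch (assumes a numeric tabular
# classifier exposing predict_proba; illustrative only, not the lime library).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def lime_style_explanation(model, x, n_samples=2000, kernel_width=1.0, seed=0):
    """Fit a weighted linear surrogate around instance x; return per-feature weights."""
    rng = np.random.default_rng(seed)
    scale = X.std(axis=0)                                   # perturbation scale per feature
    perturbed = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    preds = model.predict_proba(perturbed)[:, 1]            # black-box predictions
    dists = np.linalg.norm((perturbed - x) / scale, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))   # locality kernel
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_                                   # local feature importances

importance = lime_style_explanation(black_box, X[0])
top = np.argsort(np.abs(importance))[::-1][:5]
print("Top local features:", top, importance[top])
```

The surrogate's coefficients approximate how each feature moves the black-box prediction near the chosen instance, which is the core idea behind local feature-importance methods such as LIME.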
Papers
Interplay between Federated Learning and Explainable Artificial Intelligence: a Scoping Review
Luis M. Lopez-Ramos, Florian Leiser, Aditya Rastogi, Steven Hicks, Inga Strümke, Vince I. Madai, Tobias Budig, Ali Sunyaev, Adam Hilbert
Neuro-Symbolic AI: Explainability, Challenges, and Future Trends
Xin Zhang, Victor S. Sheng