Explainable Artificial Intelligence
Explainable Artificial Intelligence (XAI) aims to make the decision-making processes of complex AI models more transparent and understandable, addressing concerns about trust and accountability, particularly in high-stakes applications like healthcare and finance. Current research focuses on developing and evaluating various explanation methods, including those based on feature attribution (e.g., SHAP, LIME), prototype generation, and counterfactual examples, often applied to deep neural networks and other machine learning models. The ultimate goal is to improve the reliability and usability of AI systems by providing insights into their predictions and enhancing human-AI collaboration.
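As a concrete illustration of the feature-attribution methods mentioned above, the following minimal sketch uses SHAP with a simple tree-based model on a standard tabular dataset. It assumes the shap and scikit-learn packages are available; the model and dataset are illustrative choices and are not taken from the papers listed below.

```python
# Minimal feature-attribution sketch with SHAP (illustrative; assumes the
# "shap" and "scikit-learn" packages are installed).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a simple model on a standard tabular regression dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-feature Shapley values: how much each feature
# moved a given prediction away from the average model output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# Print the attribution of each feature for the first sample.
for name, value in zip(X.columns, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

Methods such as LIME or counterfactual generators expose their explanations through different interfaces, but the underlying goal is the same: attributing or contrasting a model's individual predictions in terms a human can inspect.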
Papers
Interpretable Rule-Based System for Radar-Based Gesture Sensing: Enhancing Transparency and Personalization in AI
Sarah Seifi, Tobias Sukianto, Cecilia Carbonelli, Lorenzo Servadei, Robert Wille
Developing Guidelines for Functionally-Grounded Evaluation of Explainable Artificial Intelligence using Tabular Data
Mythreyi Velmurugan, Chun Ouyang, Yue Xu, Renuka Sindhgatta, Bemali Wickramanayake, Catarina Moreira