Explainable AI
Explainable AI (XAI) aims to make the decision-making processes of artificial intelligence models more transparent and understandable, addressing the "black box" problem inherent in many machine learning systems. Current research focuses on developing and evaluating various XAI methods, including those based on feature attribution (e.g., SHAP values), counterfactual explanations, and the integration of large language models for generating human-interpretable explanations across diverse data types (images, text, time series). The significance of XAI lies in its potential to improve trust in AI systems, facilitate debugging and model improvement, and enable responsible AI deployment in high-stakes applications like healthcare and finance.
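As a minimal illustration of the feature-attribution idea mentioned above, the quantity that SHAP approximates can be computed exactly for a toy model. The sketch below is an assumption-laden brute-force Shapley computation (exponential in the number of features, so toy inputs only); the model, feature values, and baseline are all hypothetical.

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values by averaging each feature's marginal
    contribution over all feature orderings. Exponential cost:
    suitable only as an illustration on a handful of features."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        z = list(baseline)          # start from the baseline point
        prev = f(z)
        for i in order:
            z[i] = x[i]             # "reveal" feature i
            cur = f(z)
            phi[i] += cur - prev    # marginal contribution of feature i
            prev = cur
    return [p / len(perms) for p in phi]

# Hypothetical linear model: for a linear f, the Shapley value of
# feature i reduces to w_i * (x_i - baseline_i), which gives an
# easy sanity check on the brute-force computation.
w = [2.0, -1.0, 0.5]
f = lambda z: sum(wi * zi for wi, zi in zip(w, z))
phi = shapley_values(f, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # [2.0, -2.0, 1.5]
```

Note the efficiency property visible here: the attributions sum to `f(x) - f(baseline)` (2.0 - 2.0 + 1.5 = 1.5), which is what makes Shapley-style attributions attractive for explaining individual predictions.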
436 papers
Papers - Page 14
January 22, 2024
Unveiling the Human-like Similarities of Automatic Facial Expression Recognition: An Empirical Exploration through Explainable AI
F. Xavier Gaya-Morey, Silvia Ramis-Guarinos, Cristina Manresa-Yee, Jose M. Buades-Rubio

Cloud-based XAI Services for Assessing Open Repository Models Under Adversarial Attacks
Zerui Wang, Yan Liu
December 13, 2023
On Diagnostics for Understanding Agent Training Behaviour in Cooperative MARL
Wiem Khlifi, Siddarth Singh, Omayma Mahjoub, Ruan de Kock, Abidine Vall, Rihab Gorsane, Arnu Pretorius

Explainable AI in Grassland Monitoring: Enhancing Model Performance and Domain Adaptability
Shanghua Liu, Anna Hedström, Deepak Hanike Basavegowda, Cornelia Weltzien, Marina M. -C. Höhne

Prototypical Self-Explainable Models Without Re-training
Srishti Gautam, Ahcene Boubekki, Marina M. C. Höhne, Michael C. Kampffmeyer