Explainable Artificial Intelligence
Explainable Artificial Intelligence (XAI) aims to make the decision-making processes of complex AI models more transparent and understandable, addressing concerns about trust and accountability, particularly in high-stakes applications like healthcare and finance. Current research focuses on developing and evaluating various explanation methods, including those based on feature attribution (e.g., SHAP, LIME), prototype generation, and counterfactual examples, often applied to deep neural networks and other machine learning models. The ultimate goal is to improve the reliability and usability of AI systems by providing insights into their predictions and enhancing human-AI collaboration.
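To make the feature-attribution methods mentioned above concrete, the sketch below shows Shapley-value attributions computed with the SHAP library for a tabular classifier. It is an illustrative example only, not drawn from any of the papers listed here; the dataset and model are placeholder choices, and it assumes the `shap` and `scikit-learn` packages are installed.

```python
# Minimal sketch: feature attribution with SHAP on a tabular classifier.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# TreeExplainer computes exact Shapley-value attributions for tree ensembles;
# for this binary classifier the values are in log-odds (margin) space.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

# Rank features by mean absolute attribution across the test set,
# i.e., which inputs most influence the model's predictions on average.
mean_abs = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, mean_abs), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```

Per-instance attributions of this kind underlie many of the explanation pipelines studied in the papers below; other methods named in the summary (prototypes, counterfactuals) answer different questions and are not shown here.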
Papers
Eclectic Rule Extraction for Explainability of Deep Neural Network based Intrusion Detection Systems
Jesse Ables, Nathaniel Childers, William Anderson, Sudip Mittal, Shahram Rahimi, Ioana Banicescu, Maria Seale
Enhancing the Fairness and Performance of Edge Cameras with Explainable AI
Truong Thanh Hung Nguyen, Vo Thanh Khang Nguyen, Quoc Hung Cao, Van Binh Truong, Quoc Khanh Nguyen, Hung Cao
Toward enriched Cognitive Learning with XAI
Muhammad Suffian, Ulrike Kuhl, Jose M. Alonso-Moral, Alessandro Bogliolo
Locally-Minimal Probabilistic Explanations
Yacine Izza, Kuldeep S. Meel, Joao Marques-Silva
CAManim: Animating end-to-end network activation maps
Emily Kaczmarek, Olivier X. Miguel, Alexa C. Bowie, Robin Ducharme, Alysha L. J. Dingwall-Harvey, Steven Hawken, Christine M. Armour, Mark C. Walker, Kevin Dick
Explain To Decide: A Human-Centric Review on the Role of Explainable Artificial Intelligence in AI-assisted Decision Making
Milad Rogha
XAI meets Biology: A Comprehensive Review of Explainable AI in Bioinformatics Applications
Zhongliang Zhou, Mengxuan Hu, Mariah Salcedo, Nathan Gravel, Wayland Yeung, Aarya Venkat, Dongliang Guo, Jielu Zhang, Natarajan Kannan, Sheng Li