Explainable Artificial Intelligence
Explainable Artificial Intelligence (XAI) aims to make the decision-making processes of complex AI models more transparent and understandable, addressing concerns about trust and accountability, particularly in high-stakes applications like healthcare and finance. Current research focuses on developing and evaluating various explanation methods, including those based on feature attribution (e.g., SHAP, LIME), prototype generation, and counterfactual examples, often applied to deep neural networks and other machine learning models. The ultimate goal is to improve the reliability and usability of AI systems by providing insights into their predictions and enhancing human-AI collaboration.
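To make the feature-attribution idea concrete, below is a minimal sketch of exact Shapley-value attribution, the game-theoretic quantity that SHAP approximates. The linear `model`, the all-zeros baseline, and the value function `v(S)` (features outside the coalition S are replaced by baseline values) are illustrative assumptions for this sketch, not the method of any paper listed here.

```python
# A minimal sketch of exact Shapley-value feature attribution, the idea
# underlying SHAP. The linear model and baseline choice are assumptions
# made for illustration only.
from itertools import combinations
from math import factorial

import numpy as np

def model(x):
    # Hypothetical model: a fixed linear scorer over three features.
    return 2.0 * x[0] - 1.0 * x[1] + 0.5 * x[2]

def shapley_values(x, baseline, f):
    """Exact Shapley values for instance x against a baseline input.

    v(S)  = f(x with features outside S set to the baseline)
    phi_i = sum over subsets S not containing i of
            |S|! (n-|S|-1)! / n! * (v(S u {i}) - v(S))
    """
    n = len(x)

    def v(subset):
        z = np.array(baseline, dtype=float)
        for j in subset:
            z[j] = x[j]  # coalition members keep their true value
        return f(z)

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (v(S + (i,)) - v(S))
    return phi

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
phi = shapley_values(x, baseline, model)
print(phi)                                     # per-feature attributions
print(phi.sum(), model(x) - model(baseline))   # efficiency: attributions sum to f(x) - f(baseline)
```

For a linear model these attributions reduce to weight times the feature's deviation from the baseline, and the efficiency property (attributions summing to the prediction gap) holds exactly. Because exact computation enumerates all 2^n coalitions, practical tools such as SHAP rely on approximations (e.g., sampling in KernelSHAP or model-specific shortcuts in TreeSHAP) to scale to real models.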
Papers
Evaluating Explanation Methods for Vision-and-Language Navigation
Guanqi Chen, Lei Yang, Guanhua Chen, Jia Pan
Proceedings of the First International Workshop on eXplainable AI for the Arts (XAIxArts)
Nick Bryan-Kinns, Corey Ford, Alan Chamberlain, Steven David Benford, Helen Kennedy, Zijin Li, Wu Qiong, Gus G. Xia, Jeba Rezwana
Explainable Artificial Intelligence for Drug Discovery and Development -- A Comprehensive Survey
Roohallah Alizadehsani, Solomon Sunday Oyelere, Sadiq Hussain, Rene Ripardo Calixto, Victor Hugo C. de Albuquerque, Mohamad Roshanzamir, Mohamed Rahouti, Senthil Kumar Jagatheesaperumal
Quantifying Feature Importance of Games and Strategies via Shapley Values
Satoru Fujii