Explainable Artificial Intelligence
Explainable Artificial Intelligence (XAI) aims to make the decision-making processes of complex AI models more transparent and understandable, addressing concerns about trust and accountability, particularly in high-stakes applications like healthcare and finance. Current research focuses on developing and evaluating various explanation methods, including those based on feature attribution (e.g., SHAP, LIME), prototype generation, and counterfactual examples, often applied to deep neural networks and other machine learning models. The ultimate goal is to improve the reliability and usability of AI systems by providing insights into their predictions and enhancing human-AI collaboration.
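To make the feature-attribution idea concrete, here is a minimal, self-contained sketch (a hypothetical toy example, not taken from any particular paper). It uses the known closed form for Shapley values of a linear model, where feature i's contribution is w_i * (x_i - E[x_i]); libraries like SHAP generalize this to arbitrary models.

```python
import numpy as np

# Toy linear model: f(x) = w . x + b  (weights chosen arbitrarily)
w = np.array([2.0, -1.0, 0.5])
b = 1.0

def f(X):
    return X @ w + b

# Background data defines the baseline expectation E[x]
background = np.array([[0.0, 0.0, 0.0],
                       [1.0, 1.0, 1.0]])
baseline = background.mean(axis=0)

def shapley_linear(x):
    """For a linear model the exact Shapley values are
    phi_i = w_i * (x_i - E[x_i])."""
    return w * (x - baseline)

x = np.array([1.0, 2.0, 3.0])
phi = shapley_linear(x)

# Local accuracy: attributions sum to f(x) - E[f(x)]
print(phi)        # per-feature contributions: [ 1.   -1.5   1.25]
print(phi.sum())  # 0.75, equal to f(x) - mean(f(background))
```

The printed attributions satisfy the "local accuracy" property that SHAP enforces: they decompose the gap between the model's prediction for this instance and its average prediction over the background data.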