Explainable Artificial Intelligence
Explainable Artificial Intelligence (XAI) aims to make the decision-making processes of complex AI models more transparent and understandable, addressing concerns about trust and accountability, particularly in high-stakes applications like healthcare and finance. Current research focuses on developing and evaluating various explanation methods, including those based on feature attribution (e.g., SHAP, LIME), prototype generation, and counterfactual examples, often applied to deep neural networks and other machine learning models. The ultimate goal is to improve the reliability and usability of AI systems by providing insights into their predictions and enhancing human-AI collaboration.
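To make the feature-attribution idea concrete, here is a minimal sketch of occlusion-based attribution, a simple relative of the SHAP and LIME methods mentioned above: each feature's attribution is the change in the model's output when that feature is replaced by a baseline value. The `model` function is a hypothetical toy scorer for illustration, not any real library API.

```python
def model(x):
    # Hypothetical "black-box" model: a fixed linear scorer used only
    # to illustrate the attribution loop below.
    weights = [0.5, -1.2, 2.0]
    return sum(w * xi for w, xi in zip(weights, x))

def occlusion_attributions(predict, x, baseline=0.0):
    """Attribute predict(x) to each feature by occluding one feature at a time.

    A positive attribution means the feature pushed the score up relative
    to the baseline; a negative one means it pushed the score down.
    """
    base_score = predict(x)
    attributions = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline  # replace feature i with the baseline value
        attributions.append(round(base_score - predict(occluded), 6))
    return attributions

x = [1.0, 2.0, 3.0]
print(occlusion_attributions(model, x))  # → [0.5, -2.4, 6.0]
```

For a linear model the occlusion attributions simply recover each term's contribution (weight times feature value), which is why this toy example is easy to check by hand; methods like SHAP generalize this idea to non-linear models by averaging over many feature subsets.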