Explainable Artificial Intelligence
Explainable Artificial Intelligence (XAI) aims to make the decision-making processes of complex AI models more transparent and understandable, addressing concerns about trust and accountability, particularly in high-stakes applications like healthcare and finance. Current research focuses on developing and evaluating various explanation methods, including those based on feature attribution (e.g., SHAP, LIME), prototype generation, and counterfactual examples, often applied to deep neural networks and other machine learning models. The ultimate goal is to improve the reliability and usability of AI systems by providing insights into their predictions and enhancing human-AI collaboration.
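As a concrete illustration of the feature-attribution family mentioned above, the sketch below uses the open-source `shap` library to explain individual predictions of a tree-ensemble model. The dataset, model, and hyperparameters are illustrative assumptions chosen for a self-contained example, not drawn from any particular paper.

```python
# A minimal feature-attribution sketch with SHAP (pip install shap scikit-learn).
# Dataset and model are illustrative assumptions, not tied to a specific paper.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Train a black-box model on a standard tabular regression dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# TreeExplainer computes exact SHAP values for tree ensembles; each value is
# one feature's additive contribution to one prediction relative to a baseline.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test.iloc[:5])  # shape: (5, n_features)

# Rank features by contribution magnitude for the first test sample.
order = np.argsort(-np.abs(shap_values[0]))
for i in order[:5]:
    print(f"{X.columns[i]:>10s}: {shap_values[0][i]:+.3f}")
```

The attributions are additive: summing a row of `shap_values` with the explainer's `expected_value` recovers the model's prediction for that sample, which is the property that makes SHAP values directly interpretable as per-feature contributions.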