Explainable AI
Explainable AI (XAI) aims to make the decision-making processes of artificial intelligence models more transparent and understandable, addressing the "black box" problem inherent in many machine learning systems. Current research focuses on developing and evaluating various XAI methods, including those based on feature attribution (e.g., SHAP values), counterfactual explanations, and the integration of large language models for generating human-interpretable explanations across diverse data types (images, text, time series). The significance of XAI lies in its potential to improve trust in AI systems, facilitate debugging and model improvement, and enable responsible AI deployment in high-stakes applications like healthcare and finance.
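Feature attribution assigns each input feature a share of the model's output. The SHAP values mentioned above are grounded in the game-theoretic Shapley value, which can be computed exactly for small inputs by enumerating feature subsets. The following is a minimal illustrative sketch in plain Python (not the `shap` library's API); the toy model and names are invented for illustration:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values by subset enumeration.

    Exponential in the number of features, so practical only for toy
    inputs; libraries like SHAP approximate this for real models.
    """
    n = len(x)

    def value(subset):
        # Features in `subset` keep their actual value; the rest are
        # replaced by the baseline ("absent" features).
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size |S|
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy additive model: attribution recovers each feature's marginal effect.
model = lambda z: 2 * z[0] + 3 * z[1] + 1
phi = shapley_values(model, x=[1.0, 1.0], baseline=[0.0, 0.0])
# phi == [2.0, 3.0]
```

The efficiency property holds by construction: the attributions sum to the difference between the model's output on the input and on the baseline, which is what makes such explanations auditable.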