XAI Model
Explainable Artificial Intelligence (XAI) aims to make the decision-making processes of complex AI models transparent and understandable, addressing concerns about their "black box" nature, particularly in high-stakes applications such as healthcare and finance. Current research emphasizes rigorous explanation methods, including logic-based approaches and post-hoc feature-attribution techniques (e.g., SHAP, LIME), with a focus on improving the accuracy, efficiency, and usability of explanations. Developing and validating robust XAI methods is crucial for building trust in AI systems and for their responsible deployment across scientific and practical domains.
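To make the feature-attribution idea concrete, here is a minimal sketch of an ablation-style attribution, which is far cruder than SHAP or LIME but illustrates the same principle: score each input feature by how much the model's prediction changes when that feature is removed. The `black_box` function below is a hypothetical stand-in for any trained model's prediction, not a method from the papers listed here.

```python
def black_box(x):
    # Hypothetical "black box": a hand-coded linear score standing in
    # for an arbitrary trained model's prediction function.
    return 0.6 * x[0] + 0.3 * x[1] - 0.1 * x[2]

def occlusion_attribution(predict, instance, baseline):
    """Attribute a prediction to features by occlusion: replace one
    feature at a time with a baseline value and record how much the
    prediction drops. Larger scores mean the feature mattered more."""
    base_pred = predict(instance)
    scores = []
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] = baseline[i]  # "remove" feature i
        scores.append(base_pred - predict(perturbed))
    return scores

scores = occlusion_attribution(black_box, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
# Here feature 0 receives the largest attribution, matching its weight.
```

SHAP refines this idea by averaging such marginal contributions over all feature subsets, and LIME instead fits a local linear surrogate model around the instance; both trade the simplicity of single-feature occlusion for better-behaved attributions.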