XAI Model

Explainable Artificial Intelligence (XAI) aims to make the decision-making processes of complex AI models transparent and understandable, addressing concerns about their "black box" nature in high-stakes applications such as healthcare and finance. Current research emphasizes rigorous explanation methods, ranging from formal, model-based approaches such as logic-based explanations to model-agnostic feature attribution techniques (e.g., SHAP, LIME), with a focus on improving the accuracy, efficiency, and usability of explanations. Developing and validating robust XAI methods is crucial for building trust in AI systems and for their responsible deployment across scientific and practical domains.
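
To make the feature attribution idea concrete, below is a minimal sketch using the `shap` Python library. The synthetic dataset, the random-forest regressor, and the sample sizes are illustrative assumptions, not drawn from any particular paper in this collection.

```python
# Minimal sketch: feature attribution with SHAP on a tree-based model.
# The data and model here are placeholders chosen for illustration.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
import shap

# Synthetic tabular data; feature indices stand in for real feature names.
X, y = make_regression(n_samples=500, n_features=8, n_informative=4,
                       random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])  # shape: (n_samples, n_features)

# Local explanation: per-feature contributions for one prediction sum
# (together with the base value) to the model's output for that sample.
print("base value:", explainer.expected_value)
print("attributions + base for sample 0:",
      shap_values[0].sum() + explainer.expected_value)
print("model prediction for sample 0:", model.predict(X[:1])[0])

# Global view: the mean absolute SHAP value per feature ranks importance.
mean_abs = np.abs(shap_values).mean(axis=0)
for i in np.argsort(mean_abs)[::-1]:
    print(f"feature {i}: mean |SHAP| = {mean_abs[i]:.3f}")
```

TreeExplainer is used here because it exploits tree structure to compute attributions exactly and quickly; for arbitrary black-box models, model-agnostic alternatives such as `shap.KernelExplainer` or LIME estimate attributions by sampling perturbed inputs instead.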

Papers