XAI Explanation

Explainable AI (XAI) aims to make the decision-making processes of complex machine learning models, such as deep learning architectures like ResNets, more transparent and understandable. Current research focuses on developing and evaluating various explanation methods, including counterfactual explanations, feature importance analysis, and techniques that leverage Shapley values, while also addressing the inherent trade-off between explanation fidelity and privacy. This work is crucial for building trust in AI systems across diverse applications, particularly in high-stakes domains like healthcare, by improving model interpretability and facilitating human-AI collaboration.
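Of the methods mentioned above, Shapley-value attribution has the simplest exact definition: a feature's importance is its average marginal contribution to the model output over all coalitions of the other features. A minimal sketch of the exact (exponential-time) computation is below; the `predict`, `x`, and `baseline` names and the replace-absent-features-with-baseline convention are illustrative assumptions, and practical libraries approximate this sum rather than enumerating it.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for feature attribution.

    Features absent from a coalition are replaced by their baseline
    value (one common convention); `predict` maps a full feature
    vector to a scalar model output. Exponential in len(x), so this
    is only feasible for small feature counts.
    """
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [j for j in features if j != i]
        for size in range(n):
            for coalition in combinations(others, size):
                # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in coalition or j == i) else baseline[j]
                          for j in features]
                without_i = [x[j] if j in coalition else baseline[j]
                             for j in features]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Hypothetical linear model: for linear models the Shapley value of
# feature j reduces to w_j * (x_j - baseline_j), which makes the
# result easy to sanity-check by hand.
w = [2.0, -1.0, 0.5]
predict = lambda v: sum(wj * vj for wj, vj in zip(w, v))
x = [1.0, 3.0, 2.0]
baseline = [0.0, 0.0, 0.0]
print([round(p, 6) for p in shapley_values(predict, x, baseline)])
```

By construction the attributions satisfy the efficiency property: they sum to `predict(x) - predict(baseline)`, which is one reason Shapley-based explanations are popular in high-stakes settings.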

Papers