Gradient-Based XAI

Gradient-based Explainable AI (XAI) aims to make the decisions of complex machine learning models, particularly deep neural networks, more transparent by analyzing the gradients of their outputs with respect to their inputs. Current research focuses on improving the accuracy and robustness of gradient-based saliency maps, often by employing attention mechanisms within transformer networks or novel loss functions that preserve key features during processes such as denoising. This work is crucial for building trust in AI systems across diverse applications, from medical image analysis and remote sensing to climate modeling, by providing more reliable and interpretable explanations of model predictions.
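To illustrate the core idea, a minimal sketch of a vanilla-gradient saliency map is shown below. It uses a tiny hand-differentiated two-layer network with random placeholder weights (not a trained model, and not any specific method from the papers listed here): the saliency of each input feature is the absolute value of the gradient of a class score with respect to that feature.

```python
import numpy as np

# Minimal vanilla-gradient saliency sketch on a tiny two-layer
# network. Weights are random placeholders, not a trained model.
rng = np.random.default_rng(0)

x = rng.normal(size=16)           # flattened "image" input
W1 = rng.normal(size=(8, 16))     # hidden-layer weights
w2 = rng.normal(size=8)           # output weights for one class

def class_score(x):
    h = np.maximum(W1 @ x, 0.0)   # ReLU hidden activations
    return w2 @ h                 # scalar score for the class

def saliency(x):
    # Analytic gradient of the score w.r.t. the input:
    # d(score)/dx = W1^T @ (w2 * relu'(W1 @ x)).
    pre = W1 @ x
    grad = W1.T @ (w2 * (pre > 0.0))
    # Saliency is the absolute gradient magnitude per feature.
    return np.abs(grad)

s = saliency(x)                   # one importance value per input feature
```

In a deep-learning framework the same gradient would be obtained by backpropagation rather than by hand, but the interpretation is identical: features whose small perturbations most change the class score receive the highest saliency.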

Papers