Gradient-Based XAI
Gradient-based Explainable AI (XAI) aims to make the decisions of complex machine learning models, particularly deep neural networks, more transparent by analyzing the gradients of their outputs with respect to their inputs. Current research focuses on improving the accuracy and robustness of gradient-based saliency maps, often by employing attention mechanisms within transformer networks or novel loss functions that preserve key features during tasks such as denoising. This work is crucial for building trust in AI systems across diverse applications, from medical image analysis and remote sensing to climate modeling, by providing more reliable and interpretable explanations of model predictions.
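The core idea behind a vanilla-gradient saliency map can be sketched in a few lines: take the gradient of the predicted class score with respect to the input, and use its magnitude as a per-feature importance score. The snippet below is a minimal, self-contained illustration using a tiny two-layer ReLU network with manually written backpropagation; the network, its random weights, and the function name `saliency_map` are illustrative assumptions, not drawn from any specific paper.

```python
import numpy as np

def saliency_map(x, W1, W2):
    """Vanilla-gradient saliency for a toy 2-layer ReLU network.

    Returns |d logit_c / d x|, where c is the predicted class.
    """
    # Forward pass
    h_pre = W1 @ x              # pre-activations, shape (hidden,)
    h = np.maximum(h_pre, 0.0)  # ReLU
    logits = W2 @ h             # class scores, shape (classes,)
    c = int(np.argmax(logits))  # predicted class

    # Backward pass: gradient of logits[c] with respect to x
    d_h = W2[c]                 # d logit_c / d h
    d_hpre = d_h * (h_pre > 0)  # ReLU gates the gradient
    d_x = W1.T @ d_hpre         # chain rule back to the input

    return np.abs(d_x)          # saliency = gradient magnitude

rng = np.random.default_rng(0)
x = rng.normal(size=8)              # stand-in "input" (e.g. flattened image)
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(3, 16))
sal = saliency_map(x, W1, W2)
print(sal.shape)                    # one importance score per input feature
```

In practice the same computation is done with a framework's autodiff (e.g. one backward pass in PyTorch or JAX) on the full model, and refinements like SmoothGrad or Integrated Gradients average or accumulate such gradients to reduce noise.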