Gradient-Based Explanation
Gradient-based explanation methods aim to illuminate the decision-making processes of machine learning models, particularly deep learning models, by analyzing the gradients of the model's output with respect to its input features. Current research focuses on improving the robustness and applicability of these methods, including developing techniques for black-box models and addressing issues like bias and fairness in model outputs. These advances are crucial for building trust in AI systems and ensuring their responsible deployment across applications ranging from image analysis and natural language processing to improving the fairness of predictive models.
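As a minimal sketch of the core idea, the snippet below computes a vanilla saliency map for a tiny hypothetical two-layer ReLU network: the gradient of one output logit with respect to the input, whose magnitude is read as per-feature importance. The network and weights here are illustrative assumptions, not from any specific paper.

```python
import numpy as np

def saliency(x, W1, b1, W2, b2, target):
    """Vanilla gradient saliency: |d logits[target] / d x| for a
    two-layer ReLU network logits = W2 @ relu(W1 @ x + b1) + b2."""
    # Forward pass.
    z1 = W1 @ x + b1
    a1 = np.maximum(z1, 0.0)          # ReLU activation
    logits = W2 @ a1 + b2
    # Backward pass: gradient of the target logit w.r.t. the input.
    dlogit = np.zeros_like(logits)
    dlogit[target] = 1.0
    da1 = W2.T @ dlogit
    dz1 = da1 * (z1 > 0)              # ReLU gate
    dx = W1.T @ dz1
    # Gradient magnitude serves as the feature-importance score.
    return np.abs(dx)
```

Larger absolute gradients indicate input features to which the chosen output is locally most sensitive; many published methods (e.g., integrated gradients, SmoothGrad) refine this basic signal.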