Gradient Path
Gradient path methods analyze how gradients flow during the training of deep neural networks, with the aim of improving model interpretability and performance. Current research focuses on refining path-based attribution methods such as Integrated Gradients, addressing issues like attribution noise and baseline selection through iterative approaches and uncertainty quantification. These advances make complex models easier to explain and support more robust, efficient training, with applications ranging from generative models to adversarial attacks. The resulting insights are valuable both for understanding model behavior and for designing improved network architectures.
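To make the path-based idea concrete, below is a minimal sketch of the standard Riemann-sum approximation of Integrated Gradients: gradients are accumulated along the straight-line path from a baseline to the input and scaled by the input-baseline difference. The function name, the zero baseline, and the generic PyTorch `model` callable are illustrative assumptions, not a reference implementation from any of the papers summarized here.

```python
import torch

def integrated_gradients(model, x, baseline=None, steps=50, target=0):
    """Approximate Integrated Gradients for one input `x`.

    `model` is assumed to be any callable mapping a batch of inputs to
    class scores (logits); `target` selects the output to attribute.
    """
    if baseline is None:
        # The all-zeros baseline is a common default, but baseline choice
        # is itself an active research question noted above.
        baseline = torch.zeros_like(x)

    # Interpolation coefficients along the straight-line path (0, 1].
    alphas = torch.linspace(0.0, 1.0, steps + 1)[1:]

    total_grads = torch.zeros_like(x)
    for alpha in alphas:
        # Point on the path between baseline and input.
        point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
        score = model(point.unsqueeze(0))[0, target]
        grad, = torch.autograd.grad(score, point)
        total_grads += grad

    # Average path gradient, scaled by the input-baseline difference.
    return (x - baseline) * total_grads / steps
```

Usage would look like `attributions = integrated_gradients(net, image, target=pred_class)`, yielding a tensor the same shape as the input whose entries indicate each feature's contribution relative to the chosen baseline.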