Paper ID: 2205.10900

Visual Explanations from Deep Networks via Riemann-Stieltjes Integrated Gradient-based Localization

Mirtha Lucas, Miguel Lerma, Jacob Furst, Daniela Raicu

Neural networks are becoming increasingly better at tasks that involve classifying and recognizing images. At the same time, techniques intended to explain the network's output have been proposed. One such technique is the Gradient-based Class Activation Map (Grad-CAM), which is able to locate features of an input image at various levels of a convolutional neural network (CNN), but is sensitive to the vanishing-gradients problem. Techniques such as Integrated Gradients (IG) are not affected by that problem, but their use is limited to the input layer of the network. Here we introduce a new technique to produce visual explanations for the predictions of a CNN. Like Grad-CAM, our method can be applied to any layer of the network, and, like Integrated Gradients, it is not affected by the problem of vanishing gradients. For efficiency, gradient integration is performed numerically at the layer level using a Riemann-Stieltjes sum approximation. Compared to Grad-CAM, the heatmaps produced by our algorithm are better focused on the areas of interest, and their numerical computation is more stable. Our code is available at https://github.com/mlerma54/RSIGradCAM

Submitted: May 22, 2022
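
The abstract describes the core mechanism only at a high level: gradients of the class score are integrated along a path of inputs, with the integration carried out at the chosen layer as a Riemann-Stieltjes sum against the layer activations, and the resulting channel weights are then used as in Grad-CAM. The following is a minimal sketch of that idea, assuming a PyTorch model; the function and variable names, the straight-line interpolation path from a baseline, the spatial pooling of the integrated gradients, and the final Grad-CAM-style weighted combination are assumptions for illustration, not the authors' released implementation (see the linked repository for that).

```python
# Minimal sketch of layer-level Riemann-Stieltjes integrated gradients
# for CAM-style heatmaps. PyTorch, the forward hook, the linear path
# from a baseline image, and all names here are assumptions made for
# illustration; the authors' code is in the repository linked above.
import torch
import torch.nn.functional as F

def rsi_grad_cam(model, layer, image, baseline, target_class, steps=50):
    """Heatmap for `target_class` from gradients integrated at `layer`."""
    acts, grads = [], []

    # Capture the activations of the chosen layer on each forward pass.
    handle = layer.register_forward_hook(lambda m, i, o: acts.append(o))

    # Walk a straight-line path from the baseline to the actual input.
    alphas = torch.linspace(0.0, 1.0, steps + 1)
    for a in alphas:
        x = baseline + a * (image - baseline)
        score = model(x)[0, target_class]
        # Gradient of the class score w.r.t. the layer activations.
        grads.append(torch.autograd.grad(score, acts[-1])[0].detach())
    handle.remove()

    A = torch.cat([t.detach() for t in acts])   # (steps+1, C, H, W)
    G = torch.cat(grads)                        # (steps+1, C, H, W)

    # Riemann-Stieltjes sum: gradient at each step times the increment
    # of the activations along the path (integration w.r.t. A, not alpha).
    dA = A[1:] - A[:-1]
    integrated = (G[1:] * dA).sum(dim=0)        # (C, H, W)

    # Pool the integrated gradients spatially to get one weight per channel,
    # then combine with the activations of the real input, as in Grad-CAM.
    w = integrated.sum(dim=(1, 2))              # (C,)
    cam = F.relu((w[:, None, None] * A[-1]).sum(dim=0))
    return cam / (cam.max() + 1e-8)             # normalize to [0, 1]
```

In a typical use of this sketch, `baseline` would be an all-black image of the same shape as `image` and `layer` the last convolutional block of the CNN, mirroring how Grad-CAM is usually applied; both choices are assumptions here rather than prescriptions from the paper.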