DNN Interpretation
DNN interpretation aims to expose the internal workings of deep neural networks, moving beyond their "black box" nature to build trust and ease debugging. Current research develops methods that visualize and quantify how individual neurons, layers, or image regions contribute to a model's predictions, using techniques such as class activation maps and analysis of activation patterns. These efforts are crucial for improving model reliability, identifying biases, and ultimately building more trustworthy and explainable AI systems across applications such as healthcare and image analysis. Research is also actively addressing the scalability challenges of formally verifying DNN behavior.
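As an illustration of the class-activation-map idea mentioned above, the sketch below computes a Grad-CAM-style heatmap: feature maps from a network's last convolutional stage are weighted by the gradient of the target class score and summed, highlighting the image regions that drove the prediction. The torchvision ResNet-18, its `layer4` stage, and the random input are illustrative assumptions, not details taken from the source.

```python
# Minimal Grad-CAM-style class activation map, assuming a torchvision
# ResNet-18 and its last conv stage ("layer4"); both are illustrative choices.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hooks capture the feature maps and their gradients at the chosen layer.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed image
logits = model(x)
class_idx = logits.argmax(dim=1).item()

model.zero_grad()
logits[0, class_idx].backward()        # gradient of the target class score

# Weight each feature map by its spatially averaged gradient, sum, ReLU.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)  # (1, C, 1, 1)
cam = F.relu((weights * activations["value"]).sum(dim=1))    # (1, H, W)
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                    mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # normalize to [0, 1]
print(cam.shape)  # (1, 1, 224, 224) heatmap aligned with the input
```

Regions with values near 1 are those whose activations most increased the chosen class score; overlaying the heatmap on the input image is the usual way to inspect it.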