Attention Heatmaps
Attention heatmaps visualize where a model or human observer focuses within an input, with the aim of improving model interpretability and shedding light on decision-making processes. Current research uses attention mechanisms within various deep learning architectures, including transformers and convolutional neural networks, to generate these heatmaps across diverse applications such as medical image analysis (e.g., pathology, radiology) and text-to-image generation. This work is significant for enhancing the transparency and trustworthiness of complex models, facilitating improved diagnostics, and providing insights into human expert behavior for training and assessment purposes.
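For readers unfamiliar with how such heatmaps are produced, the sketch below shows one common recipe for transformer-based vision models: take the attention weights a [CLS] token assigns to image-patch tokens, reshape them onto the patch grid, and upsample to the input resolution for overlaying. This is a minimal illustrative example, not code from the listed papers; the `attention_heatmap` helper, the toy dimensions, and the random projections are assumptions made here for demonstration.

```python
# Minimal sketch of deriving a CLS-token attention heatmap from a
# ViT-style self-attention layer. All shapes and the toy inputs are
# illustrative assumptions, not taken from the papers above.
import torch
import torch.nn.functional as F

def attention_heatmap(q, k, grid_size, image_size):
    """Turn CLS->patch attention weights into an image-sized heatmap.

    q, k: (batch, seq_len, dim) query/key projections, where index 0 is
    the CLS token and the remaining seq_len - 1 tokens are image patches.
    """
    dim = q.size(-1)
    # Scaled dot-product attention weights: (batch, seq_len, seq_len)
    attn = torch.softmax(q @ k.transpose(-2, -1) / dim ** 0.5, dim=-1)
    # Attention paid by the CLS token to each patch token (drop CLS->CLS)
    cls_to_patches = attn[:, 0, 1:]                      # (batch, num_patches)
    heatmap = cls_to_patches.reshape(-1, 1, grid_size, grid_size)
    # Upsample the patch grid to the input resolution for overlaying
    heatmap = F.interpolate(heatmap, size=image_size, mode="bilinear",
                            align_corners=False)
    # Normalize each map to [0, 1] so it can be rendered as a heatmap
    flat = heatmap.flatten(1)
    flat = (flat - flat.min(1, keepdim=True).values) / (
        flat.max(1, keepdim=True).values - flat.min(1, keepdim=True).values + 1e-8
    )
    return flat.view_as(heatmap)

# Toy usage with random projections: 14x14 = 196 patches plus one CLS token,
# matching a 224x224 input split into 16x16 patches.
batch, seq_len, dim = 2, 197, 64
q, k = torch.randn(batch, seq_len, dim), torch.randn(batch, seq_len, dim)
heat = attention_heatmap(q, k, grid_size=14, image_size=(224, 224))
print(heat.shape)  # torch.Size([2, 1, 224, 224])
```

In practice the query/key projections would come from a trained model's attention layer (averaged or selected per head) rather than random tensors, and the normalized map would be blended over the input image for visualization.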
Papers
Decoding the visual attention of pathologists to reveal their level of expertise
Souradeep Chakraborty, Dana Perez, Paul Friedman, Natallia Sheuka, Constantin Friedman, Oksana Yaskiv, Rajarsi Gupta, Gregory J. Zelinsky, Joel H. Saltz, Dimitris Samaras
Joint chest X-ray diagnosis and clinical visual attention prediction with multi-stage cooperative learning: enhancing interpretability
Zirui Qiu, Hassan Rivaz, Yiming Xiao