Visual Explanation
Visual explanation aims to make the decision-making processes of complex machine learning models, particularly deep neural networks (DNNs), more transparent and understandable. Current research focuses on developing and refining techniques such as Class Activation Maps (CAMs) and their variants, leveraging the attention mechanisms of Vision Transformers (ViTs), and integrating multimodal approaches that combine visual and textual explanations. The field is central to building trust in AI systems, enabling model debugging and bias detection, and supporting more effective human-computer interaction in applications ranging from medical diagnosis to recommender systems.
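To make the class activation mapping idea mentioned above concrete, the following is a minimal, illustrative Grad-CAM sketch; it is not code from any of the listed papers. It assumes PyTorch with a torchvision ResNet-50 and hooks the network's last convolutional stage (layer4) as the target layer, both of which are illustrative choices.

import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    # Cache the feature maps of the hooked layer during the forward pass.
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    # Cache the gradient of the class score w.r.t. those feature maps.
    gradients["value"] = grad_output[0].detach()

# Target the final convolutional stage (layer4 in torchvision ResNets).
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=None):
    """Return a heatmap over the input highlighting evidence for a class.

    image: preprocessed tensor of shape (1, 3, H, W).
    """
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()  # gradients of the target class score

    # Weight each feature map by its spatially averaged gradient, then combine.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
    cam = (weights * activations["value"]).sum(dim=1)             # (1, h, w)
    cam = F.relu(cam)                                             # keep positive evidence only
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[2:],
                        mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalize to [0, 1]
    return cam.squeeze(), class_idx

# Usage: heatmap, predicted_class = grad_cam(preprocessed_image_tensor)

The main design choice is which layer to hook: deeper convolutional layers produce coarser but typically more class-discriminative heatmaps, which is the kind of depth-dependent behavior examined in the Grad-CAM reliability study listed below.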
Papers
Is visual explanation with Grad-CAM more reliable for deeper neural networks? A case study with automatic pneumothorax diagnosis
Zirui Qiu, Hassan Rivaz, Yiming Xiao
WSAM: Visual Explanations from Style Augmentation as Adversarial Attacker and Their Influence in Image Classification
Felipe Moreno-Vera, Edgar Medina, Jorge Poco
Using generative AI to investigate medical imagery models and datasets
Oran Lang, Doron Yaya-Stupp, Ilana Traynis, Heather Cole-Lewis, Chloe R. Bennett, Courtney Lyles, Charles Lau, Christopher Semturs, Dale R. Webster, Greg S. Corrado, Avinatan Hassidim, Yossi Matias, Yun Liu, Naama Hammel, Boris Babenko
Discriminative Deep Feature Visualization for Explainable Face Recognition
Zewei Xu, Yuhang Lu, Touradj Ebrahimi