Visual Explanation
Visual explanation aims to make the decision-making processes of complex machine learning models, particularly deep neural networks (DNNs), more transparent and understandable. Current research focuses on developing and refining techniques such as Class Activation Maps (CAMs) and their variants, on leveraging the attention mechanisms of Vision Transformers (ViTs), and on multimodal approaches that combine visual and textual explanations. This work is crucial for building trust in AI systems, enabling model debugging and bias detection, and supporting effective human-computer interaction in applications such as medical diagnosis and recommender systems.
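To make the CAM family concrete, below is a minimal sketch of Grad-CAM (Selvaraju et al., 2017), the gradient-weighted CAM variant: gradients of a class score are pooled into per-channel weights, which reweight the last convolutional feature maps into a heatmap. It assumes a torchvision ResNet-18 backbone; the hook targets, function names, and dummy input are illustrative, not a reference implementation.

```python
# Minimal Grad-CAM sketch, assuming a torchvision ResNet-18 backbone.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out          # (1, C, H, W) feature maps

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0]    # gradient of the class score w.r.t. the feature maps

# Hook the last convolutional stage of the backbone (illustrative choice).
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image, target_class=None):
    """Return a heatmap in [0, 1] for `image` of shape (1, 3, H, W)."""
    logits = model(image)
    if target_class is None:
        target_class = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, target_class].backward()

    # Channel weights: global-average-pool the gradients (Grad-CAM's alpha_k).
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
    cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:],
                        mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # normalize
    return cam.squeeze()

heatmap = grad_cam(torch.randn(1, 3, 224, 224))  # dummy input for illustration
```

In practice the heatmap is overlaid on the input image to show which regions drove the prediction; the same hook-and-pool pattern underlies many CAM variants, differing mainly in how the channel weights are computed.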