Visual Explanation
Visual explanation aims to make the decision-making processes of complex machine learning models, particularly deep neural networks (DNNs), more transparent and understandable. Current research focuses on developing and improving techniques such as Class Activation Maps (CAMs) and their variants, leveraging attention mechanisms in Vision Transformers (ViTs), and integrating multimodal approaches that combine visual and textual explanations. This work is crucial for building trust in AI systems, enabling model debugging and bias detection, and supporting more effective human-computer interaction in applications ranging from medical diagnosis to recommender systems.
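To make the CAM family concrete, below is a minimal Grad-CAM sketch in PyTorch, assuming a torchvision ResNet-50 with its final convolutional block (`layer4`) as the target layer. The `grad_cam` helper and the choice of target layer are illustrative assumptions, not the method of any particular paper in this collection.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Assumption: a standard torchvision ResNet-50; any CNN with a clear
# final convolutional block would work the same way.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
target_layer = model.layer4  # assumed target: the last conv block

# Capture the target layer's feature maps during the forward pass.
feats = {}
target_layer.register_forward_hook(
    lambda module, inputs, output: feats.update(value=output)
)

def grad_cam(image, class_idx=None):
    """Heatmap of input regions that increase the score of class_idx."""
    logits = model(image)                  # image: (1, 3, 224, 224), normalized
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    acts = feats["value"]                  # feature maps A^k, shape (1, C, H, W)
    # Gradient of the class score w.r.t. the feature maps, dy_c / dA^k.
    grads = torch.autograd.grad(logits[0, class_idx], acts)[0]
    # Channel weights alpha_k: global-average-pooled gradients.
    weights = grads.mean(dim=(2, 3), keepdim=True)
    # Weighted sum of feature maps; ReLU keeps only positive evidence.
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    # Upsample to input resolution and normalize to [0, 1] for overlay.
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8)).squeeze().detach()
```

Overlaying the returned heatmap on the input image (e.g., via matplotlib's `imshow` with an `alpha` blend) yields the familiar class-discriminative saliency map. Attention-based explanations for ViTs follow a similar pattern, but aggregate attention weights across heads and layers instead of pooling gradients.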