Visual Interpretation
Visual interpretation aims to make the decision-making processes of complex machine learning models, particularly in computer vision, more transparent and understandable to humans. Current research focuses on developing novel algorithms and model architectures, such as those based on class activation maps (CAMs), neural additive models (NAMs), and multi-agent frameworks, to generate more accurate and informative visual explanations. This work is crucial for building trust in AI systems, improving model debugging and design, and enabling effective human-computer collaboration in diverse applications ranging from medical diagnosis to autonomous systems.
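Of the techniques named above, the class activation map is the simplest to state concretely: it is a weighted sum of the last convolutional layer's feature maps, using the classifier weights for the target class. As a hedged illustration (the array shapes and function name here are assumptions for the sketch, not from any specific paper), a minimal CAM computation looks like:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Minimal CAM sketch: weight the final conv feature maps by the
    linear classifier's weights for one class, then rectify and normalize.

    feature_maps: (C, H, W) activations from the last conv layer
    fc_weights:   (num_classes, C) weights of the final linear layer
    class_idx:    index of the class being explained
    """
    weights = fc_weights[class_idx]                              # (C,)
    cam = np.tensordot(weights, feature_maps, axes=([0], [0]))   # (H, W)
    cam = np.maximum(cam, 0)          # keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # scale to [0, 1] for visualization
    return cam

# Toy example: 4 channels, 8x8 spatial maps, 3 classes.
rng = np.random.default_rng(0)
maps = rng.standard_normal((4, 8, 8))
w = rng.standard_normal((3, 4))
cam = class_activation_map(maps, w, class_idx=1)
```

In practice the resulting map is upsampled to the input image's resolution and overlaid as a heatmap; variants such as Grad-CAM replace the classifier weights with gradient-derived channel weights so the method applies to architectures without a global-average-pooling head.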