Interpretable Computer Vision
Interpretable computer vision aims to build machine learning models that not only classify images accurately but also provide understandable explanations for their decisions, addressing the "black box" nature of many deep learning systems. Current research focuses on model architectures, such as prototypical part networks and transformer-based approaches, that generate human-interpretable visual explanations, typically by identifying the image regions or components that contribute most to a classification. This transparency is crucial for building trust in AI systems, particularly in high-stakes applications such as medical diagnosis and security, and it enables more effective collaboration between humans and AI.
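To make the prototypical-part idea concrete, the sketch below shows a minimal PyTorch-style prototype head under illustrative assumptions: the class name PrototypeLayer, the dimensions, and the log-ratio similarity transform are placeholders rather than the exact formulation of any specific paper. Each learned prototype is compared against every spatial patch of a CNN feature map, and the best-matching patch location is what gets visualized as the explanation.

```python
import torch
import torch.nn as nn


class PrototypeLayer(nn.Module):
    """Illustrative prototypical-part head: each learned prototype is compared
    against every spatial patch of a CNN feature map, so the evidence for a
    class can be traced back to the image region that activated it."""

    def __init__(self, num_prototypes=20, feat_dim=512, num_classes=10):
        super().__init__()
        # Prototypes live in the same space as the backbone's feature patches.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, feat_dim))
        # A linear layer turns per-prototype evidence into class logits.
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, feature_map):
        # feature_map: (B, d, H, W) output of a convolutional backbone.
        b, d, h, w = feature_map.shape
        patches = feature_map.permute(0, 2, 3, 1).reshape(b, h * w, d)
        # Squared L2 distance between every patch and every prototype: (B, HW, P).
        diff = patches.unsqueeze(2) - self.prototypes.view(1, 1, -1, d)
        dists = (diff ** 2).sum(dim=-1)
        # Map distance to a bounded similarity score (smaller distance -> larger score).
        sims = torch.log((dists + 1.0) / (dists + 1e-4))
        # Max over spatial positions: where each prototype fires most strongly.
        max_sims, best_patch = sims.max(dim=1)  # both (B, P)
        logits = self.classifier(max_sims)
        # `best_patch` indexes the patch that matched each prototype best;
        # projecting that location back onto the input image gives the visual explanation.
        return logits, max_sims, best_patch


# Toy usage with random features standing in for a real backbone's output.
layer = PrototypeLayer(num_prototypes=20, feat_dim=512, num_classes=10)
features = torch.randn(4, 512, 7, 7)
logits, evidence, where = layer(features)
print(logits.shape, evidence.shape, where.shape)  # (4, 10) (4, 20) (4, 20)
```

In this kind of design, the per-prototype evidence scores feeding the final linear layer are what make the prediction inspectable: a user can see which prototypical part fired, how strongly, and where in the image.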