Interpretable Computer Vision

Interpretable computer vision aims to build machine learning models that not only classify images accurately but also provide understandable explanations for their decisions, addressing the "black box" nature of many deep learning systems. Current research focuses on model architectures, such as prototypical-part networks and transformer-based approaches, that produce human-interpretable visual explanations, typically by identifying the image regions or component parts that contribute to a classification. This transparency is crucial for building trust in AI systems, particularly in high-stakes applications such as medical diagnosis and security, and it supports more effective collaboration between humans and AI.
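
To make the prototypical-part idea concrete, the following is a minimal sketch, in PyTorch, of a ProtoPNet-style prototype layer: spatial patches of a backbone feature map are compared against learned prototype vectors, the best-matching patch per prototype yields a similarity score, and a linear layer turns those scores into class logits. The class name `PrototypeHead`, its parameters, and the choice of backbone are illustrative assumptions, not a reference implementation of any specific paper.

```python
# Sketch of a ProtoPNet-style prototype layer (illustrative, not a reference
# implementation). Assumes a CNN backbone that yields a (B, D, H, W) feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeHead(nn.Module):
    """Scores an image by its similarity to learned prototypical parts."""

    def __init__(self, num_classes: int, prototypes_per_class: int, feat_dim: int):
        super().__init__()
        self.num_prototypes = num_classes * prototypes_per_class
        # Each prototype is a 1x1 patch in feature space: shape (P, D, 1, 1).
        self.prototypes = nn.Parameter(
            torch.randn(self.num_prototypes, feat_dim, 1, 1)
        )
        # Linear layer maps prototype similarities to class logits.
        self.classifier = nn.Linear(self.num_prototypes, num_classes, bias=False)

    def forward(self, feature_map: torch.Tensor):
        # Squared L2 distance between every prototype and every spatial patch,
        # via ||x - p||^2 = ||x||^2 - 2 x.p + ||p||^2.
        x_sq = (feature_map ** 2).sum(dim=1, keepdim=True)           # (B, 1, H, W)
        p_sq = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)
        xp = F.conv2d(feature_map, self.prototypes)                  # (B, P, H, W)
        distances = F.relu(x_sq - 2 * xp + p_sq)
        # Convert distances to similarities; keep the best patch per prototype.
        similarities = torch.log((distances + 1) / (distances + 1e-4))
        scores = F.max_pool2d(similarities, kernel_size=similarities.shape[-2:])
        scores = scores.flatten(1)                                   # (B, P)
        logits = self.classifier(scores)
        # `scores` indicate which prototypical parts fired, supporting
        # "this looks like that" explanations for each prediction.
        return logits, scores


# Example usage with a dummy feature map (e.g., from a ResNet-style backbone).
head = PrototypeHead(num_classes=200, prototypes_per_class=10, feat_dim=512)
logits, scores = head(torch.randn(2, 512, 7, 7))
```

The key interpretability hook is that each prototype's activation can be traced back to the spatial location where it fired and visualized as an image patch, so the returned `scores` double as the explanation for the prediction.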

Papers