Interpretable Image Classification

Interpretable image classification aims to build machine learning models that not only classify images accurately but also provide understandable explanations for their decisions. Current research focuses on architectures that make the decision process transparent by design, including concept bottleneck models, prototype-based methods (with variants using deformable or support prototypes), and neurosymbolic approaches, as well as on improving the quality of the explanations these models produce. These advances matter because they address the "black box" nature of many deep learning models, fostering trust and enabling more effective human-AI collaboration in applications such as medical image analysis and fine-grained visual recognition.
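
As an illustration of one of these designs, the sketch below shows a minimal concept bottleneck model in PyTorch: the image is first mapped to a small set of human-interpretable concept scores, and the class label is predicted only from those scores, so the concept activations themselves serve as the explanation. All names here (`SimpleConceptBottleneck`, `num_concepts`, the toy backbone) are illustrative assumptions, not the architecture of any specific paper.

```python
# Minimal concept bottleneck sketch (illustrative, not a specific paper's method).
# The image is mapped to interpretable concept scores, and the class prediction
# depends on those scores alone, so inspecting them explains the decision.
import torch
import torch.nn as nn


class SimpleConceptBottleneck(nn.Module):
    def __init__(self, num_concepts: int, num_classes: int):
        super().__init__()
        # Tiny convolutional backbone; any image encoder could stand in here.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Bottleneck layer: each output unit corresponds to one named concept.
        self.concept_head = nn.Linear(32, num_concepts)
        # The label is predicted only from concept scores, not from raw features.
        self.classifier = nn.Linear(num_concepts, num_classes)

    def forward(self, x: torch.Tensor):
        features = self.backbone(x)
        concepts = torch.sigmoid(self.concept_head(features))  # concept scores in [0, 1]
        logits = self.classifier(concepts)
        return logits, concepts


if __name__ == "__main__":
    model = SimpleConceptBottleneck(num_concepts=8, num_classes=5)
    images = torch.randn(4, 3, 64, 64)      # dummy batch of RGB images
    logits, concepts = model(images)
    print(logits.shape, concepts.shape)     # torch.Size([4, 5]) torch.Size([4, 8])
```

In practice, models of this kind are typically trained with supervision on both the concept scores and the final label, which is what allows a user to inspect, and even intervene on, the predicted concepts at test time.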

Papers