Interpretable Image Recognition
Interpretable image recognition aims to build image classification systems that not only achieve high accuracy but also provide understandable explanations for their predictions. Current research focuses on novel architectures, such as attention-based pooling mechanisms and prototype-based networks, that enhance transparency by highlighting the image regions a prediction relies on or by generating descriptive attributes. These advances matter because they address the "black box" nature of many deep learning models, improving trust and easing debugging; applications range from forensic biometrics to human-computer interaction tasks such as sign language recognition.
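To make the prototype-based idea concrete, below is a minimal sketch of a ProtoPNet-style classification head in PyTorch. The module name, dimensions, and hyperparameters are illustrative assumptions rather than the exact method of any particular paper: each class learns a set of prototype vectors, spatial patches of the backbone's feature map are scored by their distance to those prototypes, and class logits are a linear combination of the best-matching similarities, so every prediction can be traced back to "this region looks like that prototype" evidence.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeHead(nn.Module):
    """Sketch of a prototype-based classification head (ProtoPNet-style).

    Each class owns a set of learned prototype vectors; an image is
    classified by how strongly patches of its conv feature map resemble
    those prototypes, making each logit attributable to image regions.
    """

    def __init__(self, in_channels=512, num_classes=10, protos_per_class=5):
        super().__init__()
        self.num_protos = num_classes * protos_per_class
        # Prototypes live in the conv feature space (1x1 spatial patches here).
        self.prototypes = nn.Parameter(
            torch.randn(self.num_protos, in_channels, 1, 1))
        # Linear layer mapping prototype similarities to class logits.
        self.classifier = nn.Linear(self.num_protos, num_classes, bias=False)

    def forward(self, feats):
        # feats: (B, C, H, W) feature map from a conv backbone.
        # Squared L2 distance between every prototype and every patch,
        # expanded as ||x - p||^2 = ||x||^2 - 2 x.p + ||p||^2.
        x_sq = (feats ** 2).sum(dim=1, keepdim=True)              # (B, 1, H, W)
        p_sq = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)
        xp = F.conv2d(feats, self.prototypes)                     # (B, P, H, W)
        dists = (x_sq - 2 * xp + p_sq).clamp(min=0)
        # Small distance -> large similarity (log-scale activation).
        sims = torch.log((dists + 1) / (dists + 1e-4))
        # Max-pool over locations: keep the best-matching patch per prototype.
        pooled = F.max_pool2d(sims, kernel_size=sims.shape[-2:]).flatten(1)
        return self.classifier(pooled), pooled  # logits + prototype evidence
```

Because the per-prototype similarity map `sims` is computed over all spatial locations before pooling, it can be upsampled and overlaid on the input image to show which region activated each prototype, which is the core of the interpretability claim in this family of models.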