Agnostic Saliency
Agnostic saliency methods aim to explain the internal workings of deep learning models, particularly in image recognition, without relying on specific input examples or pre-defined classes. Current research focuses on techniques that leverage attention mechanisms, multi-modal embeddings (such as CLIP), and influence-based approaches to generate saliency maps revealing which parts of an input most strongly drive a model's decision, independent of any particular task or class. This work is crucial for improving the transparency and trustworthiness of AI systems, particularly in high-stakes applications such as medical image analysis, where understanding model decisions is paramount.
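As a concrete illustration, below is a minimal gradient-based sketch of class-agnostic saliency: instead of back-propagating a class score, it back-propagates the norm of an image embedding, so no pre-defined class or label is required. The ResNet-50 backbone, the embedding-norm objective, and the `agnostic_saliency` helper are illustrative assumptions for this sketch, not a specific method from the papers surveyed here.

```python
import torch
import torchvision

# Class-agnostic backbone: expose the penultimate embedding instead of logits.
model = torchvision.models.resnet50(
    weights=torchvision.models.ResNet50_Weights.DEFAULT
)
model.fc = torch.nn.Identity()  # forward pass now returns a 2048-d embedding
model.eval()

def agnostic_saliency(image: torch.Tensor) -> torch.Tensor:
    """image: normalized (3, H, W) tensor -> (H, W) saliency map in [0, 1]."""
    x = image.unsqueeze(0).requires_grad_(True)
    embedding = model(x)          # (1, 2048); no class information involved
    embedding.norm().backward()   # sensitivity of the embedding magnitude
    saliency = x.grad.abs().amax(dim=1).squeeze(0)  # max over color channels
    return saliency / saliency.max()
```

The same pattern transfers to a multi-modal encoder such as CLIP by swapping in its image-embedding output, which is one way the embedding-based approaches mentioned above can stay agnostic to downstream tasks.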