Paper ID: 2503.06084 • Published Mar 8, 2025
Exploring Interpretability for Visual Prompt Tuning with Hierarchical Concepts
Yubin Wang, Xinyang Jiang, De Cheng, Xiangqian Zhao, Zilong Wang, Dongsheng Li, Cairong Zhao
Tongji University•Microsoft Research Asia•Xidian University
Visual prompt tuning offers significant advantages for adapting pre-trained
visual foundation models to specific tasks. However, current research provides
limited insight into the interpretability of this approach, which is essential
for enhancing AI reliability and enabling AI-driven knowledge discovery. In
this paper, rather than learning abstract prompt embeddings, we propose the
first framework, named Interpretable Visual Prompt Tuning (IVPT), to explore
the interpretability of visual prompts by introducing hierarchical concept
prototypes. Specifically, visual prompts are linked to human-understandable
semantic concepts, represented as a set of category-agnostic prototypes, each
corresponding to a specific region of the image. Then, IVPT aggregates features
from these regions to generate interpretable prompts, which are structured
hierarchically to explain visual prompts at different granularities.
Comprehensive qualitative and quantitative evaluations on fine-grained
classification benchmarks show that IVPT achieves superior interpretability
and performance compared with conventional visual prompt tuning methods and
existing interpretable methods.
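
To make the abstract's mechanism concrete, the sketch below illustrates one plausible form of prototype-based prompt generation: learnable, category-agnostic prototypes attend over the patch tokens of a frozen backbone, each attention map marks an image region, and features aggregated from that region form one interpretable prompt, with a coarser level obtained by grouping fine prompts. This is not the authors' implementation; the module name `ConceptPromptGenerator`, the tensor shapes, the projection layer, and the fixed grouping used for the coarse level are all assumptions made for illustration.

```python
import torch
import torch.nn as nn


class ConceptPromptGenerator(nn.Module):
    """Hypothetical sketch of prototype-based interpretable prompt generation."""

    def __init__(self, embed_dim=768, num_fine=8, group_size=2):
        super().__init__()
        assert num_fine % group_size == 0
        # Category-agnostic concept prototypes (fine level), one per concept.
        self.prototypes = nn.Parameter(torch.randn(num_fine, embed_dim) * 0.02)
        self.proj = nn.Linear(embed_dim, embed_dim)
        self.group_size = group_size

    def forward(self, patch_tokens):
        # patch_tokens: (B, N, D) from a frozen ViT, CLS token excluded.
        B, N, D = patch_tokens.shape
        # Similarity between each prototype and each patch location.
        sim = torch.einsum("kd,bnd->bkn", self.prototypes, patch_tokens) / D ** 0.5
        # Per-prototype attention over patches acts as a soft image region.
        attn = sim.softmax(dim=-1)  # (B, K, N)
        # Aggregate region features into fine-grained interpretable prompts.
        fine_prompts = self.proj(torch.einsum("bkn,bnd->bkd", attn, patch_tokens))
        # Coarse prompts: average groups of fine prompts (toy stand-in for the hierarchy).
        K = fine_prompts.shape[1]
        coarse_prompts = fine_prompts.view(
            B, K // self.group_size, self.group_size, D
        ).mean(dim=2)
        return fine_prompts, coarse_prompts, attn


if __name__ == "__main__":
    tokens = torch.randn(2, 196, 768)  # e.g. 14x14 patches from a ViT-B/16
    gen = ConceptPromptGenerator()
    fine, coarse, attn = gen(tokens)
    print(fine.shape, coarse.shape, attn.shape)  # (2, 8, 768) (2, 4, 768) (2, 8, 196)
```

The returned attention maps are what would make the prompts inspectable: each fine prompt can be visualized as the region its prototype attends to, and coarse prompts explain the same image at a broader granularity.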