Transformer Adapter

Transformer adapters are lightweight modules inserted into pre-trained vision transformers that enable efficient adaptation to new tasks or domains without retraining the entire model. Current research focuses on designing adapters tailored to specific visual tasks, such as semantic segmentation and action recognition, often incorporating attention mechanisms or convolutional layers to supply inductive biases suited to the target task. Because only the adapter parameters are trained, this parameter-efficient approach significantly reduces computational cost and improves generalization across diverse applications, including continual learning, few-shot learning, and multi-task learning, making large vision transformer models more accessible and practical.
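
To make the idea concrete, below is a minimal sketch of a bottleneck-style adapter wrapped around a frozen transformer block, written in PyTorch. It is illustrative only: the class names (`Adapter`, `AdaptedBlock`), the reduction factor, and the use of `nn.TransformerEncoderLayer` as a stand-in backbone block are assumptions, not the design of any specific paper listed below.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""

    def __init__(self, d_model: int, reduction: int = 16):
        super().__init__()
        d_bottleneck = max(d_model // reduction, 1)
        self.down = nn.Linear(d_model, d_bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(d_bottleneck, d_model)
        # Start as a near-identity mapping so the pre-trained behavior is preserved.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class AdaptedBlock(nn.Module):
    """Wraps a frozen pre-trained transformer block with a small trainable adapter."""

    def __init__(self, block: nn.Module, d_model: int):
        super().__init__()
        self.block = block
        self.adapter = Adapter(d_model)
        # Freeze the backbone block; only the adapter receives gradients.
        for p in self.block.parameters():
            p.requires_grad = False

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))


# Example usage with a generic encoder layer standing in for a ViT block.
d_model = 768
block = nn.TransformerEncoderLayer(d_model, nhead=12, batch_first=True)
adapted = AdaptedBlock(block, d_model)

tokens = torch.randn(2, 197, d_model)  # ViT-style token sequences (batch, tokens, dim)
out = adapted(tokens)

trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
total = sum(p.numel() for p in adapted.parameters())
print(f"trainable: {trainable:,} / total: {total:,}")  # only the adapter is trainable
```

The parameter counts printed at the end illustrate the efficiency argument: the adapter adds on the order of a few percent of the block's parameters, while the pre-trained weights stay untouched and can be shared across tasks.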

Papers