Transformer Adapter
Transformer adapters are lightweight modules inserted into pre-trained vision transformers to enable efficient adaptation to new tasks or domains without retraining the entire model. Current research focuses on designing adapters optimized for specific visual tasks, such as semantic segmentation and action recognition, often incorporating attention mechanisms or convolutional layers to leverage existing inductive biases. This parameter-efficient approach significantly reduces computational costs and improves generalizability across diverse applications, including continual learning, few-shot learning, and multi-task learning, making large vision transformer models more accessible and practical.
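The pattern described above, inserting a small trainable module while the pre-trained weights stay frozen, is most commonly realized as a bottleneck adapter: a down-projection, a nonlinearity, an up-projection, and a residual connection. The following is a minimal NumPy sketch of that idea; the class name, dimensions, and zero initialization of the up-projection (which makes the adapter an identity function at the start of training) are illustrative choices, not a specific paper's implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class BottleneckAdapter:
    """Sketch of a bottleneck adapter: down-project, nonlinearity,
    up-project, then add the result back to the input (residual)."""

    def __init__(self, d_model, d_bottleneck, seed=0):
        rng = np.random.default_rng(seed)
        # These two small matrices are the only trainable parameters;
        # the surrounding transformer stays frozen.
        self.W_down = rng.normal(0.0, 0.02, size=(d_model, d_bottleneck))
        # Zero-initialized up-projection: the adapter starts as an
        # identity mapping, so inserting it does not disturb the
        # pre-trained model's behavior before fine-tuning.
        self.W_up = np.zeros((d_bottleneck, d_model))

    def __call__(self, x):
        return x + relu(x @ self.W_down) @ self.W_up

# ViT-style token sequence: 197 tokens (1 class token + 14x14 patches),
# hidden size 768 as in ViT-Base.
adapter = BottleneckAdapter(d_model=768, d_bottleneck=64)
tokens = np.random.default_rng(1).normal(size=(197, 768))
out = adapter(tokens)
```

With a bottleneck of 64, the adapter adds roughly 2 × 768 × 64 ≈ 98K parameters per insertion point, a small fraction of a transformer block, which is the source of the parameter efficiency the summary refers to.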