Parameter-Efficient Transfer Learning
Parameter-efficient transfer learning (PETL) adapts large pre-trained models to new tasks by updating only a small fraction of their parameters, avoiding the computational and storage costs of full fine-tuning. Current research emphasizes techniques such as adapters, prompt tuning, and other lightweight modules inserted into vision transformers (ViTs), diffusion models, and language models, often combined with strategies for cross-modal transfer and multi-task learning. This approach matters because it enables deployment of powerful models on resource-constrained devices and rapid adaptation to diverse downstream tasks in computer vision, natural language processing, and speech recognition.
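To make the adapter idea concrete, below is a minimal PyTorch sketch of a bottleneck adapter attached to a frozen transformer layer. The `BottleneckAdapter` class, its dimensions, and its placement after the layer are illustrative assumptions for exposition, not the method of any paper listed here.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual.
    Only these parameters are trained; the backbone stays frozen."""
    def __init__(self, dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck_dim)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, dim)
        # Zero-init the up-projection so the adapter starts as an identity map.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

# Stand-in for one frozen layer of a pre-trained backbone (hypothetical setup).
backbone = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
for p in backbone.parameters():
    p.requires_grad = False

adapter = BottleneckAdapter(dim=768)
x = torch.randn(2, 16, 768)            # (batch, tokens, hidden dim)
out = adapter(backbone(x))             # adapter applied after the frozen layer
trainable = sum(p.numel() for p in adapter.parameters())
frozen = sum(p.numel() for p in backbone.parameters())
print(f"trainable adapter params: {trainable:,} vs frozen layer params: {frozen:,}")
```

The zero-initialized up-projection means training starts from the unmodified pre-trained model, and only the roughly 100k adapter parameters receive gradients, which is the core of the parameter-efficiency argument.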
Papers
Parameter Efficient Adaptation for Image Restoration with Heterogeneous Mixture-of-Experts
Hang Guo, Tao Dai, Yuanchao Bai, Bin Chen, Xudong Ren, Zexuan Zhu, Shu-Tao Xia
READ: Recurrent Adapter with Partial Video-Language Alignment for Parameter-Efficient Transfer Learning in Low-Resource Video-Language Modeling
Thong Nguyen, Xiaobao Wu, Xinshuai Dong, Khoi Le, Zhiyuan Hu, Cong-Duy Nguyen, See-Kiong Ng, Luu Anh Tuan