Conditional Adapter
Conditional adapters are parameter-efficient methods for adapting large pre-trained models (such as vision transformers and language models) to new tasks or domains without retraining the entire model. Research focuses on improving adapter design (e.g., low-rank adaptations, mixtures of adapters), developing efficient training strategies (e.g., contrastive training, dynamic scaling), and speeding up inference through conditional computation, where adapter layers are selectively skipped. These methods substantially reduce computational cost, memory footprint, and training time while matching or even improving task performance, with applications in natural language processing, computer vision, and speech synthesis.
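The low-rank adapter idea mentioned above can be sketched in a few lines: a frozen weight matrix is augmented with a trainable low-rank update, so only the small factors are trained. This is a minimal illustrative sketch (names, shapes, and initialization are assumptions, not taken from any specific paper in this collection):

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, rank = 8, 2                          # hidden size, adapter rank (illustrative)
W = rng.standard_normal((d_model, d_model))   # frozen pre-trained weight

# Trainable low-rank factors. B starts at zero so the adapted layer
# initially reproduces the frozen model exactly.
A = rng.standard_normal((rank, d_model)) * 0.01
B = np.zeros((d_model, rank))

def adapted_forward(x, scale=1.0):
    """Frozen path plus low-rank update: x @ (W + scale * B @ A).T"""
    return x @ W.T + scale * (x @ A.T) @ B.T

x = rng.standard_normal((1, d_model))
# With B = 0 the low-rank update is a no-op relative to the frozen layer.
print(np.allclose(adapted_forward(x), x @ W.T))
```

Only `A` and `B` (2 × rank × d_model parameters) would be trained, versus d_model² for full fine-tuning; a conditional variant could additionally gate `scale` per input to skip the adapter at inference time.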