Conditional Adapter
Conditional adapters are parameter-efficient modules for adapting large pre-trained models (such as vision transformers and language models) to new tasks or domains without retraining the entire model; the "conditional" aspect refers to activating the adapter computation selectively, for example per input or per token. Research focuses on improving adapter design (e.g., low-rank adaptation, mixture-of-adapters), developing efficient training strategies (e.g., contrastive training, dynamic scaling), and accelerating inference through conditional computation. These methods substantially reduce computational cost, memory footprint, and training time while maintaining or even improving task performance, with applications in natural language processing, computer vision, and speech synthesis.
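To make the idea concrete, below is a minimal sketch of a conditional adapter combining two ingredients mentioned above: a low-rank bottleneck and a gate that conditionally skips the adapter at inference. The class name, rank, and threshold are illustrative assumptions, not the design of any specific paper listed here.

```python
import torch
import torch.nn as nn

class ConditionalLowRankAdapter(nn.Module):
    """Hypothetical low-rank adapter with per-token conditional computation.

    A learned gate decides, per token, whether to apply the adapter's
    bottleneck transform or pass the hidden state through unchanged,
    so "easy" tokens skip the extra computation at inference time.
    """

    def __init__(self, d_model: int, rank: int = 8, threshold: float = 0.5):
        super().__init__()
        self.down = nn.Linear(d_model, rank, bias=False)   # project to low rank
        self.up = nn.Linear(rank, d_model, bias=False)     # project back up
        self.gate = nn.Linear(d_model, 1)                  # per-token gating score
        self.threshold = threshold                          # assumed gating cutoff
        nn.init.zeros_(self.up.weight)  # zero-init: adapted model starts identical
                                        # to the frozen pre-trained model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) hidden states from a frozen layer
        score = torch.sigmoid(self.gate(x))                # (batch, seq_len, 1)
        if self.training:
            # Soft gating keeps the path differentiable during training.
            delta = self.up(torch.relu(self.down(x)))
            return x + score * delta
        # Hard gating at inference: only gated-on tokens pay for the adapter.
        mask = (score > self.threshold).squeeze(-1)        # (batch, seq_len)
        out = x.clone()
        if mask.any():
            selected = x[mask]                             # (n_active, d_model)
            out[mask] = selected + self.up(torch.relu(self.down(selected)))
        return out
```

With the backbone frozen, only the adapter's roughly 2 x d_model x rank (plus gate) parameters are trained, which is where the parameter efficiency comes from; the hard gate at inference is what trades a small accuracy risk for speed.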
Papers
Dynamic Adapter with Semantics Disentangling for Cross-lingual Cross-modal Retrieval
Rui Cai, Zhiyu Dong, Jianfeng Dong, Xun Wang
Efficient Fine-Tuning of Single-Cell Foundation Models Enables Zero-Shot Molecular Perturbation Prediction
Sepideh Maleki, Jan-Christian Huetter, Kangway V. Chuang, Gabriele Scalia, Tommaso Biancalani