Pre-trained Backbones
Pre-trained backbones are neural network architectures trained on massive datasets that serve as efficient starting points for downstream tasks. Current research focuses on adapting them to new tasks with limited data, addressing issues such as distribution shift and catastrophic forgetting through adapter modules, content-style decomposition, and parameter-efficient fine-tuning methods like prompt tuning and target parameter pre-training. This work matters because it improves the efficiency and generalizability of deep learning models across diverse applications, from medical image analysis and autonomous driving to natural language processing and industrial inspection, while reducing the need for extensive task-specific training data.
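To make the "freeze the backbone, train a small module" idea concrete, below is a minimal PyTorch sketch of adapter-style parameter-efficient fine-tuning. It is an illustrative assumption, not the method of either paper listed below: the `resnet18` backbone choice, the `Adapter` class, and all dimensions are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

# Load a pre-trained backbone and freeze all of its parameters.
backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False

# Strip the classification head so the backbone emits features.
feat_dim = backbone.fc.in_features  # 512 for resnet18
backbone.fc = nn.Identity()

class Adapter(nn.Module):
    """Hypothetical bottleneck adapter trained on top of the frozen backbone."""
    def __init__(self, dim, bottleneck=64, num_classes=10):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)  # project down to a small bottleneck
        self.up = nn.Linear(bottleneck, dim)    # project back up
        self.head = nn.Linear(dim, num_classes)

    def forward(self, feats):
        # Residual bottleneck: adapt the frozen features, then classify.
        adapted = feats + self.up(F.relu(self.down(feats)))
        return self.head(adapted)

adapter = Adapter(feat_dim)

# Only the adapter's (few) parameters are given to the optimizer.
optimizer = torch.optim.AdamW(adapter.parameters(), lr=1e-3)

x = torch.randn(8, 3, 224, 224)               # dummy image batch
y = torch.randint(0, 10, (8,))                # dummy labels
with torch.no_grad():
    feats = backbone(x)                       # frozen feature extraction
loss = F.cross_entropy(adapter(feats), y)
loss.backward()
optimizer.step()
```

The residual form lets the adapter start near an identity mapping, so training only the bottleneck and head updates a small fraction of the total parameter count while the backbone's general-purpose features are preserved; the papers below pursue the same goal with different parameter selections.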
Papers
LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning
Yi-Lin Sung, Jaemin Cho, Mohit Bansal
Singular Value Fine-tuning: Few-shot Segmentation requires Few-parameters Fine-tuning
Yanpeng Sun, Qiang Chen, Xiangyu He, Jian Wang, Haocheng Feng, Junyu Han, Errui Ding, Jian Cheng, Zechao Li, Jingdong Wang