Pre-Trained Backbones
Pre-trained backbones are foundational neural network architectures, trained on massive datasets, that serve as efficient starting points for downstream tasks. Current research focuses on adapting them to new tasks with limited data, addressing distribution shifts and catastrophic forgetting through techniques such as adapter modules, content-style decomposition, and parameter-efficient fine-tuning methods like prompt tuning and target parameter pre-training. This work is significant because it improves the efficiency and generalizability of deep learning models across diverse applications, from medical image analysis and autonomous driving to natural language processing and industrial inspection, while reducing the need for extensive task-specific training data.
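To make the adapter idea concrete, the sketch below shows one common parameter-efficient pattern: a small bottleneck adapter with a residual connection, trained while the backbone stays frozen. This is a minimal illustration in PyTorch, not the implementation from any of the papers listed here; the `Adapter` class, the bottleneck width, and the toy transformer backbone are all illustrative assumptions.

```python
# Minimal sketch of a bottleneck adapter on a frozen backbone layer.
# Names, sizes, and the toy backbone are illustrative assumptions.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()
        # Near-identity initialization so training starts close to the
        # frozen backbone's behavior.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

# Freeze the backbone; only the adapter (and any task head) is trained.
backbone = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
for p in backbone.parameters():
    p.requires_grad = False

adapter = Adapter(dim=768)
x = torch.randn(2, 16, 768)       # (batch, tokens, features)
features = adapter(backbone(x))   # adapted features for a downstream head
```

The design choice that makes this parameter-efficient is the small bottleneck: only the adapter's few thousand weights receive gradients, so the massive pre-trained backbone is reused unchanged across tasks.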
Papers
Semi-Supervised Fine-Tuning of Vision Foundation Models with Content-Style Decomposition
Mariia Drozdova, Vitaliy Kinakh, Yury Belousov, Erica Lastufka, Slava Voloshynovskiy
Finetuning Pre-trained Model with Limited Data for LiDAR-based 3D Object Detection by Bridging Domain Gaps
Jiyun Jang, Mincheol Chang, Jongwon Park, Jinkyu Kim