Unsupervised Pre-Training
Unsupervised pre-training aims to leverage vast amounts of unlabeled data to learn robust feature representations before fine-tuning on specific downstream tasks, thereby improving model performance and reducing the need for labeled data. Current research focuses on developing effective pre-training strategies across diverse modalities (images, graphs, time series, audio), employing architectures such as masked autoencoders and transformers alongside objectives like contrastive learning, and exploring techniques such as language-vision prompting and hierarchical approaches. This approach is particularly valuable in domains with limited labeled data, such as medical imaging, rare event detection, and low-resource language processing, and holds significant potential for advancing both scientific research and real-world applications.
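To make one of the objectives mentioned above concrete, the sketch below shows a minimal contrastive pre-training loss (InfoNCE / NT-Xent style): two augmented views of the same inputs are encoded, and each embedding is trained to identify its counterpart among all other embeddings in the batch. The function name, shapes, and temperature value are illustrative assumptions, not taken from any of the papers listed here.

```python
# Minimal sketch of a contrastive pre-training objective (InfoNCE / NT-Xent).
# Assumes z1 and z2 are encoder outputs for two augmented views of the same batch.
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two views of the same unlabeled inputs."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # (2B, dim)
    sim = z @ z.t() / temperature             # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))         # exclude self-similarity
    batch = z1.size(0)
    # The positive for sample i is its other view at index i + B (or i - B).
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets.to(z.device))

# Example usage with random embeddings standing in for encoder outputs.
if __name__ == "__main__":
    z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
    print(info_nce_loss(z1, z2).item())
```

After pre-training the encoder with such an objective on unlabeled data, its weights can be fine-tuned on a small labeled set for the downstream task.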
Papers
Revisiting the Transferability of Supervised Pretraining: an MLP Perspective
Yizhou Wang, Shixiang Tang, Feng Zhu, Lei Bai, Rui Zhao, Donglian Qi, Wanli Ouyang
Unleashing the Potential of Unsupervised Pre-Training with Intra-Identity Regularization for Person Re-Identification
Zizheng Yang, Xin Jin, Kecheng Zheng, Feng Zhao