Large Pre-Trained Models
Large pre-trained models (LPMs) are massive neural networks trained on enormous datasets with the aim of generalizing strongly across diverse downstream tasks with minimal further training. Current research emphasizes efficient fine-tuning techniques, such as prompt engineering, low-rank adaptation (e.g., LoRA, SVFit), and sparse parameter updates, to reduce computational costs and improve adaptability while addressing issues like overfitting and catastrophic forgetting. This field is significant due to LPMs' transformative impact on applications ranging from natural language processing and computer vision to robotics and education, driving advances in both the theoretical understanding and practical deployment of AI systems.
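As a rough illustration of the low-rank adaptation idea mentioned above, the sketch below wraps a frozen linear layer with a trainable low-rank correction so that only a small number of parameters are updated during fine-tuning. The class name `LoRALinear`, the rank `r`, and the scaling factor `alpha` are illustrative assumptions, not drawn from any particular paper listed here.

```python
# Minimal sketch of low-rank adaptation (LoRA) on a single linear layer.
# Only the low-rank factors A and B are trained; the pre-trained weights stay frozen.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Computes y = W x + (alpha / r) * B A x, updating only A and B."""

    def __init__(self, base_linear: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base_linear
        # Freeze the pre-trained weights.
        for p in self.base.parameters():
            p.requires_grad = False
        in_f, out_f = base_linear.in_features, base_linear.out_features
        self.lora_A = nn.Parameter(torch.randn(r, in_f) * 0.01)
        # Zero-initializing B means the adapter starts as a no-op.
        self.lora_B = nn.Parameter(torch.zeros(out_f, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(512, 512))
    x = torch.randn(4, 512)
    print(layer(x).shape)  # torch.Size([4, 512])
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(f"trainable params: {trainable}")  # only the low-rank factors
```

In this setup the trainable parameter count scales with the rank `r` rather than the full weight matrix size, which is the main source of the computational savings the overview describes.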
Papers
PEFT-SER: On the Use of Parameter Efficient Transfer Learning Approaches For Speech Emotion Recognition Using Pre-trained Speech Models
Tiantian Feng, Shrikanth Narayanan
Image Clustering via the Principle of Rate Reduction in the Age of Pretrained Models
Tianzhe Chu, Shengbang Tong, Tianjiao Ding, Xili Dai, Benjamin David Haeffele, René Vidal, Yi Ma