Large Pre-Trained Models
Large pre-trained models (LPMs) are massive neural networks trained on enormous datasets, with the aim of achieving strong generalization across diverse downstream tasks with minimal further training. Current research emphasizes parameter-efficient adaptation techniques, such as prompt engineering, low-rank adaptation (e.g., LoRA, SVFit), and sparse parameter updates, to reduce computational costs and improve adaptability while mitigating issues like overfitting and catastrophic forgetting. This field is significant because of LPMs' transformative impact across applications, from natural language processing and computer vision to robotics and education, driving advances in both the theoretical understanding and the practical deployment of AI systems.
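To make the low-rank adaptation idea concrete, the sketch below shows the core mechanism in PyTorch: the pre-trained weight matrix is frozen, and only a small pair of low-rank factors is trained, so the effective weight becomes W + (alpha / r) * B A. This is a minimal illustration under common LoRA conventions (A initialized with small Gaussian noise, B at zero, a scale of alpha / r); the class and parameter names here are illustrative, not the API of any particular library.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal sketch of a LoRA-style adapter around a frozen linear layer.

    Effective weight: W + (alpha / r) * B @ A, where only A and B are trained.
    Names and defaults are illustrative assumptions, not a library API.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights; only the low-rank factors are updated.
        self.base.weight.requires_grad_(False)
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # B starts at zero, so at step 0 the adapted model exactly matches
        # the pre-trained model.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen forward pass plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16.0)
    out = layer(torch.randn(2, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(out.shape, f"trainable: {trainable}/{total}")

With r = 8 on a 768x768 layer, the trainable factors hold roughly 12K parameters versus about 590K frozen ones, which is the source of the cost savings the summary above describes.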