Large Pre-Trained Models
Large pre-trained models (LPMs) are massive neural networks trained on enormous datasets to achieve strong generalization across diverse downstream tasks with minimal further training. Current research emphasizes efficient fine-tuning techniques, such as prompt engineering, low-rank adaptation (e.g., LoRA, SVFit), and sparse parameter updates, which reduce computational cost and improve adaptability while mitigating issues like overfitting and catastrophic forgetting. The field matters because LPMs have had a transformative impact on applications ranging from natural language processing and computer vision to robotics and education, driving advances in both the theoretical understanding and the practical deployment of AI systems.
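As a concrete illustration of one of these techniques, the sketch below shows the core low-rank adaptation (LoRA) idea: freeze the pre-trained weights and learn only a small rank-r residual. This is a minimal PyTorch sketch under stated assumptions, not the interface of any particular LoRA library; the `LoRALinear` class name, the rank `r=8`, and the `alpha` scaling default are illustrative choices.

```python
# Minimal, illustrative sketch of low-rank adaptation (LoRA) in PyTorch.
# The frozen weight W is augmented with a trainable low-rank update B @ A,
# so only r * (d_in + d_out) parameters are learned per adapted layer.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank residual (illustrative helper)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), r=8)
    out = layer(torch.randn(2, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(out.shape, trainable)  # 12,288 trainable vs. 590,592 frozen parameters
```

Zero-initializing B makes the adapted layer behave exactly like the frozen layer at the start of fine-tuning, and the learned low-rank update can be merged back into the base weight after training, so inference incurs no extra cost.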