Large-Scale Pre-Trained Models
Large-scale pre-trained models (LPTMs) are foundational AI models trained on massive datasets and designed to adapt efficiently to diverse downstream tasks with minimal additional training. Current research emphasizes parameter-efficient fine-tuning techniques, such as Low-Rank Adaptation (LoRA) and prompt-based methods, which reduce computational cost and improve generalization, particularly for high-resolution data and federated learning scenarios. By improving model efficiency and reducing the need for extensive labeled data, these techniques are enabling progress in fields such as medical image analysis, video understanding, and natural language processing.
Papers
Entries dated: March 12, 2024; February 1, 2024; January 24, 2024; December 14, 2023; December 11, 2023; November 29, 2023; November 12, 2023; October 31, 2023; October 28, 2023; October 8, 2023; September 27, 2023; September 11, 2023; August 29, 2023; August 21, 2023; July 10, 2023; June 27, 2023; June 18, 2023; June 15, 2023; June 9, 2023; May 17, 2023.