Large-Scale Pre-Trained Models
Large-scale pre-trained models (LPTMs) are foundational AI models trained on massive datasets and designed to adapt efficiently to diverse downstream tasks with minimal additional training. Current research emphasizes parameter-efficient fine-tuning techniques, such as Low-Rank Adaptation (LoRA) and prompt-based methods, which reduce computational cost and improve generalization, particularly for high-resolution data and federated-learning scenarios. These advances are driving progress in fields such as medical image analysis, video understanding, and natural language processing by improving model efficiency and reducing the need for extensive labeled data.
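To make the parameter-efficiency argument concrete, the sketch below shows the core idea behind LoRA: the pre-trained weights are frozen and only a low-rank update B·A is trained. This is a minimal PyTorch illustration, not the implementation from any particular paper; the class name `LoRALinear` and the hyperparameters `r` and `alpha` are illustrative choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pre-trained linear layer with a trainable
    low-rank update: h = W x + (alpha / r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        # A is small random, B is zero, so the update starts as a no-op.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: adapt a single 768x768 projection of a pre-trained model.
layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(4, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 12,288 vs ~590k in the frozen base layer
```

With rank r = 8, only 2 × r × 768 ≈ 12k parameters are updated per layer instead of the full ~590k, which is why such adapters cut fine-tuning cost so sharply at scale.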