Large-Scale Pre-Trained Models
Large-scale pre-trained models (LPTMs) are foundational AI models trained on massive datasets and designed to adapt efficiently to diverse downstream tasks with minimal additional training. Current research emphasizes parameter-efficient fine-tuning techniques, such as Low-Rank Adaptation (LoRA) and prompt-based methods, which reduce computational cost and improve generalization, particularly for high-resolution data and federated learning scenarios. These advances are enabling progress in fields such as medical image analysis, video understanding, and natural language processing by improving model efficiency and reducing the need for extensive labeled data.
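To make the parameter-efficient idea concrete, here is a minimal LoRA-style sketch in PyTorch. It is an illustration of the general technique, not the implementation from any particular paper: the class name, rank r, and scaling factor alpha are illustrative choices. The pre-trained weights are frozen, and only a small low-rank update is trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    h = W x + (alpha / r) * B A x, with A of shape (r, in) and B of shape (out, r)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scale = alpha / r
        # A gets small random init, B starts at zero, so the adapted model
        # is initially identical to the pre-trained one.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap a pre-trained projection, then train only the low-rank factors.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
x = torch.randn(4, 768)
print(layer(x).shape)  # torch.Size([4, 768])
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable}/{total}")  # ~2% of the layer's parameters
```

Because only the low-rank factors receive gradients, the optimizer state and the per-task checkpoint shrink accordingly, which is what makes this style of adaptation attractive for federated and resource-constrained settings.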