Prompt Regularization
Prompt regularization is a technique for improving the performance and generalization of large pre-trained models, particularly on vision-language tasks, by using prompts as a form of regularization during fine-tuning. Current research applies it to challenges such as few-shot learning, anomaly detection, and class-incremental learning, often building on models like CLIP and Segment Anything. The approach mitigates overfitting to limited downstream datasets while preserving the knowledge embedded in the pre-trained model, yielding more robust and efficient models across diverse applications.
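To make the idea concrete, below is a minimal sketch of one common instantiation: learnable prompt vectors are fine-tuned on a downstream task while an L2 penalty pulls their induced text features toward those of frozen, hand-crafted prompts (in the spirit of knowledge-guided methods such as KgCoOp), so the model adapts without drifting far from pre-trained knowledge. All module names, tensor shapes, and hyperparameter values here are illustrative assumptions, not code from any specific paper; the text encoder is replaced by a toy stand-in for self-containment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PromptLearner(nn.Module):
    """Learnable context vectors combined with frozen class embeddings.

    A stand-in for a real prompt learner that would prepend context tokens
    to class-name tokens and run them through a frozen text encoder.
    """

    def __init__(self, n_ctx: int, dim: int, class_embeds: torch.Tensor):
        super().__init__()
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)
        self.register_buffer("class_embeds", class_embeds)  # (n_cls, dim)

    def forward(self) -> torch.Tensor:
        # Mix the averaged learnable context into each class embedding.
        return self.class_embeds + self.ctx.mean(dim=0)


def prompt_reg_loss(image_feats, labels, learned_txt, frozen_txt,
                    lam: float = 8.0, tau: float = 0.07):
    """CLIP-style classification loss plus an L2 prompt regularizer."""
    image_feats = F.normalize(image_feats, dim=-1)
    learned = F.normalize(learned_txt, dim=-1)
    frozen = F.normalize(frozen_txt, dim=-1)
    # Task loss: cosine-similarity logits against the learned prompts.
    logits = image_feats @ learned.t() / tau
    ce = F.cross_entropy(logits, labels)
    # Regularizer: keep learned prompt features close to hand-crafted ones,
    # preserving pre-trained knowledge while adapting to the downstream task.
    reg = (learned - frozen).pow(2).sum(dim=-1).mean()
    return ce + lam * reg


# Toy usage with random stand-ins for pre-computed CLIP features.
n_cls, dim = 10, 512
frozen_txt = torch.randn(n_cls, dim)       # e.g. features of "a photo of a {class}"
learner = PromptLearner(n_ctx=4, dim=dim, class_embeds=frozen_txt.clone())
opt = torch.optim.SGD(learner.parameters(), lr=1e-2)

images = torch.randn(32, dim)              # frozen image-encoder outputs
labels = torch.randint(0, n_cls, (32,))

opt.zero_grad()
loss = prompt_reg_loss(images, labels, learner(), frozen_txt)
loss.backward()
opt.step()
print(f"loss = {loss.item():.4f}")
```

The key design choice is that only the small set of prompt parameters receives gradients, while both encoders stay frozen; the weight `lam` trades off downstream fit against fidelity to the hand-crafted prompts (its value here is an assumption, to be tuned per task).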