Learnable Prompt

Learnable prompts are trainable input sequences used to steer large pre-trained models, such as vision-language models (VLMs) and the Segment Anything Model (SAM), toward specific tasks without retraining the model itself. Current research focuses on improving the robustness and generalization of these prompts across diverse datasets and tasks, often through prompt refinement, hybrid prompt architectures combining static and dynamic prompts, and knowledge distillation. Because only the prompt parameters are updated while the backbone stays frozen, this approach offers a parameter-efficient way to adapt powerful foundation models to applications such as medical image analysis, anomaly detection, and few-shot learning, enhancing their practicality and reducing the need for large labeled datasets.
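The core idea can be illustrated with a minimal sketch: a small set of prompt embeddings is prepended to the (fixed) input embeddings, and only those prompt parameters receive gradient updates while the pre-trained model stays frozen. The code below is a toy illustration under stated assumptions, not any specific paper's method: the "foundation model" is just a frozen linear head over mean-pooled embeddings, and the gradient is computed analytically for a squared-error loss.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 8, 4, 6  # embedding dim, prompt length, input sequence length

# Frozen "foundation model": a fixed linear head over mean-pooled embeddings.
# (Stand-in for a large pre-trained network whose weights are never updated.)
w = rng.normal(size=d)

def forward(prompt, x):
    seq = np.concatenate([prompt, x], axis=0)  # prepend learnable prompt tokens
    return w @ seq.mean(axis=0)                # mean-pool, then frozen head

x = rng.normal(size=(n, d))  # fixed input token embeddings
target = 1.0                 # task-specific training signal

prompt = rng.normal(size=(k, d)) * 0.01  # the ONLY trainable parameters
lr = 0.5
losses = []
for _ in range(100):
    score = forward(prompt, x)
    loss = (score - target) ** 2
    losses.append(loss)
    # Analytic gradient of the loss w.r.t. each prompt row:
    # d(score)/d(prompt_row) = w / (k + n), chained with dL/d(score).
    grad = 2.0 * (score - target) * np.tile(w / (k + n), (k, 1))
    prompt -= lr * grad  # update prompts only; w (the "model") is untouched

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.2e}")
```

The same pattern scales up directly: in practice the prompt is a matrix of token embeddings optimized by backpropagation through a frozen transformer, which is why prompt tuning is so parameter-efficient relative to full fine-tuning.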

Papers