Learnable Prompt
Learnable prompts are adaptable input sequences used to guide large pre-trained models, such as vision-language models (VLMs) and the Segment Anything Model (SAM), toward specific tasks without extensive model retraining. Current research focuses on improving the robustness and generalization of these prompts across diverse datasets and tasks, often employing techniques such as prompt refinement, hybrid prompt architectures that combine static and dynamic components, and knowledge distillation. This approach offers a parameter-efficient way to adapt powerful foundation models to applications including medical image analysis, anomaly detection, and few-shot learning, enhancing their practicality and reducing the need for large labeled datasets.
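To make the idea concrete, the sketch below shows a minimal, generic soft-prompt setup in PyTorch: a small set of trainable embedding vectors is prepended to the input embeddings of a frozen backbone, and only the prompt plus a lightweight task head are optimized. All names here (LearnablePrompt, prompt_length, the dummy encoder and classifier) are illustrative assumptions rather than the method of any particular paper.

```python
import torch
import torch.nn as nn

class LearnablePrompt(nn.Module):
    """Generic soft prompt: trainable vectors prepended to the input
    embeddings of a frozen backbone."""

    def __init__(self, prompt_length: int, embed_dim: int):
        super().__init__()
        # Only these parameters are updated during adaptation.
        self.prompt = nn.Parameter(torch.randn(prompt_length, embed_dim) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, embed_dim)
        batch = token_embeddings.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the learnable prompt tokens to the input sequence.
        return torch.cat([prompt, token_embeddings], dim=1)


# Illustrative usage: a frozen embedding layer and transformer encoder
# stand in for a pre-trained backbone; only the prompt and task head train.
embed_dim, vocab_size = 256, 1000
embedding = nn.Embedding(vocab_size, embed_dim)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8, batch_first=True),
    num_layers=2,
)
for p in list(embedding.parameters()) + list(encoder.parameters()):
    p.requires_grad = False  # backbone stays frozen

prompt = LearnablePrompt(prompt_length=8, embed_dim=embed_dim)
classifier = nn.Linear(embed_dim, 10)  # lightweight task head
optimizer = torch.optim.AdamW(
    list(prompt.parameters()) + list(classifier.parameters()), lr=1e-3
)

tokens = torch.randint(0, vocab_size, (4, 32))    # dummy input batch
features = encoder(prompt(embedding(tokens)))     # prompt-conditioned features
logits = classifier(features.mean(dim=1))         # pooled prediction
```

Because gradients flow only through the prompt vectors and the small head, the number of trainable parameters stays tiny relative to the frozen backbone, which is what makes this form of adaptation parameter-efficient.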