Prompt Tuning
Prompt tuning is a parameter-efficient fine-tuning technique for adapting large pre-trained models, such as vision-language models (VLMs) and large language models (LLMs), to downstream tasks by learning a small set of prompt parameters while keeping the base model frozen, rather than retraining the entire model. Current research focuses on improving prompt design across modalities (text, image, multimodal), enhancing calibration and robustness, and exploring applications in diverse fields including image segmentation, code repair, and recommendation systems. Because only the prompt parameters are updated, the approach is computationally efficient and less prone to overfitting, making it a valuable tool for adapting powerful foundation models to specialized tasks with limited data.
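As a rough illustration of the core mechanism (not drawn from any of the papers listed below), the following PyTorch sketch prepends learnable "soft prompt" embeddings to the input embeddings of a frozen model, so that only the prompt vectors receive gradients during training. The class and parameter names (SoftPromptModel, prompt_length) are illustrative assumptions, not an established API.

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Wraps a frozen pretrained model with learnable soft-prompt embeddings.

    Only `self.soft_prompt` is trained; all base-model weights stay frozen.
    """
    def __init__(self, base_model, embed_dim, prompt_length=20):
        super().__init__()
        self.base_model = base_model
        for p in self.base_model.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # The soft prompt: prompt_length learnable embedding vectors.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_length, embed_dim) * 0.02)

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, embed_dim) token embeddings.
        batch = input_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the learned prompt to every sequence, then run the frozen model.
        return self.base_model(torch.cat([prompt, input_embeds], dim=1))

if __name__ == "__main__":
    # Stand-in for a pretrained encoder that accepts input embeddings directly.
    embed_dim = 64
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True),
        num_layers=2,
    )
    model = SoftPromptModel(encoder, embed_dim=embed_dim, prompt_length=8)
    x = torch.randn(2, 10, embed_dim)  # stand-in for token embeddings
    out = model(x)                      # shape: (2, 8 + 10, 64)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(out.shape, trainable)         # only the 8 * 64 prompt values train
```

In this setup an optimizer would be built over `model.soft_prompt` alone, which is why prompt tuning stores and trains orders of magnitude fewer parameters than full fine-tuning.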
Papers
LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning
Amirhossein Abaskohi, Sascha Rothe, Yadollah Yaghoobzadeh
ContrastNER: Contrastive-based Prompt Tuning for Few-shot NER
Amirhossein Layegh, Amir H. Payberah, Ahmet Soylu, Dumitru Roman, Mihhail Matskin
Deeply Coupled Cross-Modal Prompt Learning
Xuejing Liu, Wei Tang, Jinghui Lu, Rui Zhao, Zhaojun Guo, Fei Tan