Prompt Tuning
Prompt tuning is a parameter-efficient fine-tuning technique for adapting large pre-trained models, such as vision-language models (VLMs) and large language models (LLMs), to specific downstream tasks by learning a small set of task-specific prompt parameters rather than retraining the entire model. Current research focuses on improving prompt design across modalities (text, image, multimodal), enhancing calibration and robustness, and exploring applications in diverse fields including image segmentation, code repair, and recommendation systems. Because only the prompts are updated, the approach is computationally efficient and less prone to overfitting, making it a practical way to adapt powerful foundation models to specialized tasks with limited data.
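To make the core idea concrete, below is a minimal PyTorch sketch of soft prompt tuning: a frozen backbone receives a few learnable prompt embeddings prepended to its input, and only those prompts (plus a small task head) are optimized. All names and dimensions here (PromptTunedClassifier, the toy Transformer backbone, vocabulary size, etc.) are illustrative assumptions, not taken from any of the papers listed below.

```python
# Minimal sketch of soft prompt tuning with a frozen Transformer backbone.
# All names and sizes are illustrative assumptions, not from the papers below.
import torch
import torch.nn as nn

class PromptTunedClassifier(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, n_prompts=8, n_classes=2):
        super().__init__()
        # "Pre-trained" components (frozen): token embeddings + Transformer encoder.
        self.embed = nn.Embedding(vocab_size, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(encoder_layer, num_layers=2)
        for p in list(self.embed.parameters()) + list(self.backbone.parameters()):
            p.requires_grad = False  # the backbone is never updated

        # Learnable soft prompts: n_prompts continuous vectors prepended to every input.
        self.prompts = nn.Parameter(torch.randn(n_prompts, d_model) * 0.02)
        # Small task head; in this sketch it is trained alongside the prompts.
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, token_ids):                       # token_ids: (batch, seq_len)
        x = self.embed(token_ids)                       # (batch, seq_len, d_model)
        p = self.prompts.unsqueeze(0).expand(x.size(0), -1, -1)
        x = torch.cat([p, x], dim=1)                    # prepend prompts along the sequence
        h = self.backbone(x)
        return self.head(h[:, 0])                       # predict from the first prompt position

model = PromptTunedClassifier()
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)       # only prompts + head are updated

# One toy training step on random data.
tokens = torch.randint(0, 1000, (4, 16))
labels = torch.randint(0, 2, (4,))
optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(tokens), labels)
loss.backward()
optimizer.step()
print(f"trainable params: {sum(p.numel() for p in trainable)}")
```

The printed count of trainable parameters is a tiny fraction of the full model, which is what makes prompt tuning attractive for adapting large frozen backbones with limited task data.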
Papers
Exploring Embedding Priors in Prompt-Tuning for Improved Interpretability and Control
Sergey Sedov, Sumanth Bharadwaj Hachalli Karanam, Venu Gopal Kadamba
Prompt Tuning for Item Cold-start Recommendation
Yuezihan Jiang, Gaode Chen, Wenhan Zhang, Jingchi Wang, Yinjie Jiang, Qi Zhang, Jingjian Lin, Peng Jiang, Kaigui Bian
Proactive Adversarial Defense: Harnessing Prompt Tuning in Vision-Language Models to Detect Unseen Backdoored Images
Kyle Stein, Andrew Arash Mahyari, Guillermo Francia, Eman El-Sheikh
Adapting Unsigned Graph Neural Networks for Signed Graphs: A Few-Shot Prompt Tuning Approach
Zian Zhai, Sima Qing, Xiaoyang Wang, Wenjie Zhang