Prompt Tuning
Prompt tuning is a parameter-efficient fine-tuning technique for adapting large pre-trained models, such as vision-language models (VLMs) and large language models (LLMs), to downstream tasks by learning a small set of task-specific parameters (soft prompts) while keeping the base model's weights frozen, rather than retraining the entire model. Current research focuses on improving prompt design across modalities (text, image, multimodal), enhancing calibration and robustness, and exploring applications in diverse fields including image segmentation, code repair, and recommendation systems. Because only a tiny fraction of the parameters is updated, the approach offers significant computational savings and a reduced risk of overfitting, making it a valuable tool for adapting powerful foundation models to specialized tasks with limited data.
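To make the core mechanic concrete, here is a minimal PyTorch sketch of soft prompt tuning: trainable prompt embeddings are prepended to the input token embeddings while every backbone weight stays frozen. This is an illustration under stated assumptions, not the method of any paper listed below; the class name `SoftPromptModel`, the stand-in `nn.TransformerEncoder` backbone, and hyperparameters such as `num_prompt_tokens` are all illustrative.

```python
import torch
import torch.nn as nn


class SoftPromptModel(nn.Module):
    """Wraps a frozen base model and prepends trainable prompt embeddings."""

    def __init__(self, base_model: nn.Module, embed: nn.Embedding,
                 num_prompt_tokens: int = 20):
        super().__init__()
        self.base_model = base_model
        self.embed = embed
        # Freeze everything except the soft prompt.
        for p in list(base_model.parameters()) + list(embed.parameters()):
            p.requires_grad = False
        # Trainable soft prompt of shape (num_prompt_tokens, embed_dim).
        self.prompt = nn.Parameter(
            torch.randn(num_prompt_tokens, embed.embedding_dim) * 0.02)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        tok_emb = self.embed(input_ids)                              # (B, T, D)
        prompt = self.prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
        # Prepend the prompt so the frozen model conditions on it.
        return self.base_model(torch.cat([prompt, tok_emb], dim=1))  # (B, P+T, D)


# Toy usage with a stand-in encoder; in practice the base model would be
# a frozen pre-trained transformer.
vocab_size, dim = 1000, 64
embed = nn.Embedding(vocab_size, dim)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    num_layers=2,
)
model = SoftPromptModel(encoder, embed, num_prompt_tokens=8)
optimizer = torch.optim.AdamW([model.prompt], lr=1e-3)  # prompt params only
output = model(torch.randint(0, vocab_size, (2, 16)))   # (2, 8 + 16, 64)
```

Only the prompt matrix (here 8 × 64 = 512 values) is ever updated, so optimizer state and per-task checkpoints stay tiny relative to full fine-tuning, which is where the computational efficiency noted above comes from.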
Papers
Attention Prompt Tuning: Parameter-efficient Adaptation of Pre-trained Models for Spatiotemporal Modeling
Wele Gedara Chaminda Bandara, Vishal M. Patel
Semantic Residual Prompts for Continual Learning
Martin Menabue, Emanuele Frascaroli, Matteo Boschini, Enver Sangineto, Lorenzo Bonicelli, Angelo Porrello, Simone Calderara
DeMPT: Decoding-enhanced Multi-phase Prompt Tuning for Making LLMs Be Better Context-aware Translators
Xinglin Lyu, Junhui Li, Yanqing Zhao, Min Zhang, Daimeng Wei, Shimin Tao, Hao Yang, Min Zhang
Infusing Hierarchical Guidance into Prompt Tuning: A Parameter-Efficient Framework for Multi-level Implicit Discourse Relation Recognition
Haodong Zhao, Ruifang He, Mengnan Xiao, Jing Xu