Prompt Tuning
Prompt tuning is a parameter-efficient fine-tuning technique for adapting large pre-trained models, such as vision-language models (VLMs) and large language models (LLMs), to specific downstream tasks by learning a small set of prompt parameters rather than retraining the entire model. Current research focuses on improving prompt design for various modalities (text, image, multimodal), enhancing calibration and robustness, and exploring applications across diverse fields including image segmentation, code repair, and recommendation systems. This approach offers significant advantages in computational efficiency and a reduced risk of overfitting, making it a valuable tool for adapting powerful foundation models to specialized tasks with limited data.
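To make the idea concrete, below is a minimal sketch of soft prompt tuning in plain PyTorch. The backbone is a toy Transformer encoder standing in for a frozen pre-trained model, and the class name `PromptTunedClassifier`, the number of prompt vectors, and the small classification head are illustrative assumptions, not a reference implementation from any of the papers listed here. Only the prompt embeddings (and the head) receive gradients; all backbone weights stay frozen.

```python
# Minimal soft prompt tuning sketch (assumption: a classification task with
# pre-embedded inputs; a real setup would use a pre-trained LLM/VLM backbone).
import torch
import torch.nn as nn

class PromptTunedClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, embed_dim: int,
                 num_prompts: int = 10, num_classes: int = 2):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # freeze every backbone weight
        # Learnable "soft prompt" vectors prepended to each input sequence.
        self.prompts = nn.Parameter(torch.randn(num_prompts, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, num_classes)  # small trainable head

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, embed_dim), already-embedded tokens
        batch = token_embeds.size(0)
        prompts = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        x = torch.cat([prompts, token_embeds], dim=1)  # prepend prompts
        h = self.backbone(x)                           # frozen forward pass
        return self.head(h[:, 0])                      # predict from first position

# Toy usage: a 2-layer Transformer encoder stands in for the frozen model.
embed_dim = 64
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True),
    num_layers=2,
)
model = PromptTunedClassifier(backbone, embed_dim)
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)  # updates prompts + head only

dummy_inputs = torch.randn(8, 16, embed_dim)       # (batch, seq, dim) embeddings
labels = torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(dummy_inputs), labels)
loss.backward()
optimizer.step()
```

The key design choice is that the optimizer only ever sees the prompt vectors and the task head, which is why the memory and storage cost per downstream task stays small compared with full fine-tuning.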
Papers
ArGue: Attribute-Guided Prompt Tuning for Vision-Language Models
Xinyu Tian, Shu Zou, Zhaoyuan Yang, Jing Zhang
Can Out-of-Domain data help to Learn Domain-Specific Prompts for Multimodal Misinformation Detection?
Amartya Bhattacharya, Debarshi Brahma, Suraj Nagaje Mahadev, Anmol Asati, Vikas Verma, Soma Biswas
Federated Learning of Large Language Models with Parameter-Efficient Prompt Tuning and Adaptive Optimization
Tianshi Che, Ji Liu, Yang Zhou, Jiaxiang Ren, Jiwen Zhou, Victor S. Sheng, Huaiyu Dai, Dejing Dou
Learning to Correct Noisy Labels for Fine-Grained Entity Typing via Co-Prediction Prompt Tuning
Minghao Tang, Yongquan He, Yongxiu Xu, Hongbo Xu, Wenyuan Zhang, Yang Lin