Few-Shot Prompt Tuning

Few-shot prompt tuning adapts pre-trained language models (PLMs) to new tasks using only a handful of labeled examples, optimizing a small prompt rather than retraining the entire model. Current research focuses on improving robustness to noisy labels and out-of-distribution data, exploring techniques such as decomposed prompt tuning and label-guided data augmentation, and applying the approach across PLMs such as GPT and LLaMA. It offers significant advantages in resource-constrained settings, particularly for domain-specific tasks and low-resource languages, achieving performance comparable to, or exceeding, fully supervised fine-tuning while requiring minimal data and compute.
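
To make the core idea concrete, below is a minimal sketch of soft prompt tuning: a small set of continuous prompt embeddings is prepended to the input and trained while the PLM itself stays frozen. The model choice (gpt2 via Hugging Face Transformers), prompt length, and hyperparameters are illustrative assumptions, not taken from any particular paper in this collection.

```python
# Minimal soft prompt tuning sketch: only the prompt embeddings are trained,
# every weight of the pre-trained language model stays frozen.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                      # assumed small PLM for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()                             # disable dropout; gradients still flow to the prompt
for p in model.parameters():             # freeze all PLM parameters
    p.requires_grad = False

n_prompt = 20                            # number of learnable prompt vectors (assumption)
embed_dim = model.get_input_embeddings().embedding_dim
soft_prompt = torch.nn.Parameter(torch.randn(n_prompt, embed_dim) * 0.02)
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)   # optimizes only the prompt

def step(text: str) -> float:
    """One gradient step on a single example (few-shot data is tiny)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    tok_embeds = model.get_input_embeddings()(ids)                    # (1, T, D)
    inputs = torch.cat([soft_prompt.unsqueeze(0), tok_embeds], dim=1) # prepend prompt
    # Mask out the prompt positions (-100) so the loss covers only real tokens.
    labels = torch.cat(
        [torch.full((1, n_prompt), -100, dtype=torch.long), ids], dim=1
    )
    out = model(inputs_embeds=inputs, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()

# Usage: a few passes over a handful of task examples (hypothetical data).
for epoch in range(10):
    loss = step("Review: great movie! Sentiment: positive")
    print(f"epoch {epoch}: loss {loss:.3f}")
```

Because the PLM is frozen, only the prompt matrix (here 20 x embedding-dim values) needs to be stored and updated per task, which is what makes the approach attractive when data and compute are scarce.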

Papers