Few-Shot Prompt Tuning
Few-shot prompt tuning adapts pre-trained language models (PLMs) to new tasks from only a handful of labeled examples by optimizing prompts rather than retraining the entire model. Current research focuses on improving robustness to noisy labels and out-of-distribution data, exploring techniques such as decomposed prompt tuning and label-guided data augmentation, and applying the approach to various PLMs, including GPT and LLaMA. Because it needs minimal data and compute while matching or exceeding fully supervised fine-tuning, the approach is especially attractive in resource-constrained settings such as domain-specific tasks and low-resource languages.
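The core idea is to keep the PLM frozen and learn only a small set of continuous prompt embeddings from the few available examples. Below is a minimal sketch of that setup, assuming PyTorch and Hugging Face Transformers; the GPT-2 backbone, prompt length, learning rate, and example texts are illustrative assumptions, not taken from any particular paper listed here.

```python
# Minimal soft prompt tuning sketch: the PLM stays frozen, only the prompt
# embeddings are trained on a few verbalized examples.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                      # assumed backbone for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.requires_grad_(False)              # freeze all PLM weights

num_prompt_tokens = 20
embed_dim = model.get_input_embeddings().embedding_dim
# Trainable soft prompt: the only parameters updated during tuning.
soft_prompt = nn.Parameter(torch.randn(num_prompt_tokens, embed_dim) * 0.02)
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)

def step(text: str) -> float:
    """One optimization step on a single few-shot example."""
    enc = tokenizer(text, return_tensors="pt")
    input_ids = enc["input_ids"]                          # (1, seq_len)
    tok_embeds = model.get_input_embeddings()(input_ids)  # (1, seq_len, dim)
    # Prepend the soft prompt to the token embeddings.
    inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), tok_embeds], dim=1)
    # Mask the prompt positions out of the loss (-100 is the ignore index).
    labels = torch.cat(
        [torch.full((1, num_prompt_tokens), -100), input_ids], dim=1
    )
    attention_mask = torch.ones(inputs_embeds.shape[:2], dtype=torch.long)
    out = model(inputs_embeds=inputs_embeds,
                attention_mask=attention_mask,
                labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()

# Few-shot usage: iterate over a handful of labeled examples verbalized as text.
for example in ["Review: great movie. Sentiment: positive",
                "Review: boring plot. Sentiment: negative"]:
    print(step(example))
```

Only the `num_prompt_tokens * embed_dim` prompt parameters receive gradients, which is what keeps the method cheap enough for the low-resource settings described above.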