Soft Prompt
Soft prompting is a parameter-efficient technique for adapting large language models (LLMs) to specific tasks by learning short sequences of continuous, trainable embedding vectors ("soft prompts") that are prepended to the model's input embeddings, steering its output while the base model's weights stay frozen. Current research focuses on improving the efficiency and effectiveness of soft prompt learning, including methods for generating dynamic prompts, transferring prompts between tasks, and mitigating vulnerabilities to adversarial attacks. Because only the prompt parameters are trained, the approach offers significant advantages in resource-constrained settings and for enhancing model robustness and safety, impacting both the efficiency of LLM deployment and the development of more secure and reliable AI systems.
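To make the mechanism concrete, the following is a minimal sketch of prompt tuning in PyTorch, assuming a frozen GPT-2 backbone from Hugging Face Transformers. The class name `SoftPromptModel` and the hyperparameter `num_prompt_tokens` are illustrative choices, not drawn from the papers listed below.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer


class SoftPromptModel(nn.Module):
    """Wraps a frozen causal LM and trains only a prepended soft prompt."""

    def __init__(self, backbone: nn.Module, num_prompt_tokens: int = 20):
        super().__init__()
        self.backbone = backbone
        # Freeze every weight of the base language model.
        for p in self.backbone.parameters():
            p.requires_grad = False
        embed_dim = backbone.get_input_embeddings().embedding_dim
        # The soft prompt: a small trainable matrix of continuous "token" vectors.
        self.soft_prompt = nn.Parameter(
            torch.randn(num_prompt_tokens, embed_dim) * 0.02
        )

    def forward(self, input_ids, attention_mask=None, labels=None):
        batch_size = input_ids.size(0)
        token_embeds = self.backbone.get_input_embeddings()(input_ids)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch_size, -1, -1)
        # Prepend the learned prompt vectors in embedding space, not as text.
        inputs_embeds = torch.cat([prompt, token_embeds], dim=1)
        if attention_mask is not None:
            prompt_mask = torch.ones(
                batch_size, prompt.size(1),
                dtype=attention_mask.dtype, device=attention_mask.device,
            )
            attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        if labels is not None:
            # Ignore the prompt positions when computing the language-modeling loss.
            ignore = torch.full(
                (batch_size, prompt.size(1)), -100,
                dtype=labels.dtype, device=labels.device,
            )
            labels = torch.cat([ignore, labels], dim=1)
        return self.backbone(
            inputs_embeds=inputs_embeds,
            attention_mask=attention_mask,
            labels=labels,
        )


tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = SoftPromptModel(AutoModelForCausalLM.from_pretrained("gpt2"))

batch = tokenizer(["Translate to French: cheese"], return_tensors="pt")
out = model(**batch, labels=batch["input_ids"])

# Only the soft prompt receives gradients; the backbone stays frozen.
optimizer = torch.optim.AdamW([model.soft_prompt], lr=1e-3)
out.loss.backward()
optimizer.step()
```

In this sketch the optimizer is given only the `soft_prompt` parameter, which is what makes the method parameter-efficient: a task is encoded in a few thousand trainable values rather than in updates to the full model.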
Papers
Knowledge Prompts: Injecting World Knowledge into Language Models through Soft Prompts
Cicero Nogueira dos Santos, Zhe Dong, Daniel Cer, John Nham, Siamak Shakeri, Jianmo Ni, Yun-hsuan Sung
XPrompt: Exploring the Extreme of Prompt Tuning
Fang Ma, Chen Zhang, Lei Ren, Jingang Wang, Qifan Wang, Wei Wu, Xiaojun Quan, Dawei Song