Shot Prompting
Shot prompting is a technique for adapting large language models (LLMs) to specific tasks by including a small number of examples (demonstrations) directly in the prompt, improving task performance without updating model weights. Current research focuses on optimizing prompt design, including strategies such as reusing generated outputs as in-batch demonstrations, incorporating rule-based reasoning, and carefully selecting or generating examples to mitigate issues such as label bias and overcorrection. The approach is significant because it makes LLMs adaptable to diverse tasks, particularly in low-resource settings and in scenarios with ambiguous or incomplete information, and has led to improvements in areas such as question answering, code generation, and even video anomaly detection.
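Concretely, shot prompting amounts to prepending a handful of labeled demonstrations to the query so the model can infer the task from the pattern and complete the final, unanswered input. The following Python sketch shows minimal k-shot prompt construction for a sentiment task; the instruction text, example pairs, and the build_few_shot_prompt helper are illustrative assumptions, not taken from the papers listed below.

# A minimal sketch of few-shot (k-shot) prompt construction.
# The task, demonstrations, and formatting here are hypothetical;
# real setups tune example selection, ordering, and phrasing.

from typing import List, Tuple

def build_few_shot_prompt(
    demonstrations: List[Tuple[str, str]],  # (input, output) pairs
    query: str,
    instruction: str = "Classify the sentiment as positive or negative.",
) -> str:
    """Concatenate an instruction, k labeled examples, and the new query.

    The model is expected to continue the pattern and emit a label
    for the final, unanswered input.
    """
    parts = [instruction, ""]
    for text, label in demonstrations:
        parts.append(f"Input: {text}")
        parts.append(f"Output: {label}")
        parts.append("")
    parts.append(f"Input: {query}")
    parts.append("Output:")  # left open for the model to complete
    return "\n".join(parts)

if __name__ == "__main__":
    demos = [
        ("The film was a delight from start to finish.", "positive"),
        ("I walked out halfway through.", "negative"),
    ]
    prompt = build_few_shot_prompt(demos, "The acting was superb.")
    print(prompt)  # in practice, this string is sent to an LLM completion API

Which demonstrations are chosen, and in what order, can materially change accuracy, which is why much of the research surveyed above centers on example selection and generation.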
Papers
Improved Compositional Generalization by Generating Demonstrations for Meta-Learning
Sam Spilsbury, Pekka Marttinen, Alexander Ilin
Decomposed Prompting for Machine Translation Between Related Languages using Large Language Models
Ratish Puduppully, Anoop Kunchukuttan, Raj Dabre, Ai Ti Aw, Nancy F. Chen