Shot Prompting
Shot prompting is a technique for adapting large language models (LLMs) to specific tasks by supplying a small number of in-context examples, with the aim of improving efficiency and performance across applications. Current research focuses on optimizing prompt design through strategies such as reusing generated outputs as demonstrations within a batch, incorporating rule-based reasoning, and carefully selecting or generating examples to mitigate label bias and overcorrection. The approach is significant because it makes LLMs more adaptable to diverse tasks, particularly in low-resource settings and under ambiguous or incomplete information, yielding improvements in areas such as question answering, code generation, and even video anomaly detection.
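The core mechanism is simple: a handful of labeled demonstrations are prepended to the query so the model can infer the task from the examples alone. A minimal sketch in Python, where the sentiment-labeling task, the helper name, and the prompt layout are illustrative assumptions rather than the method of any particular paper:

```python
def build_few_shot_prompt(demonstrations, query,
                          instruction="Label the sentiment as Positive or Negative."):
    """Assemble an instruction, k demonstrations, and the new query
    into a single few-shot prompt string (hypothetical helper)."""
    lines = [instruction, ""]
    for text, label in demonstrations:
        # Each demonstration pairs an input with its desired output.
        lines.append(f"Text: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The query repeats the same layout but leaves the answer slot
    # open for the model to complete.
    lines.append(f"Text: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

demos = [
    ("The plot was gripping from start to finish.", "Positive"),
    ("I walked out halfway through.", "Negative"),
]
prompt = build_few_shot_prompt(demos, "A delightful surprise of a film.")
print(prompt)
```

The assembled string would then be sent to any LLM completion endpoint; the research summarized above concerns how to choose, order, or generate the `demonstrations` list so the resulting prompt avoids pitfalls like label bias.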
Papers
TACT: Advancing Complex Aggregative Reasoning with Information Extraction Tools
Avi Caciularu, Alon Jacovi, Eyal Ben-David, Sasha Goldshtein, Tal Schuster, Jonathan Herzig, Gal Elidan, Amir Globerson
From Tarzan to Tolkien: Controlling the Language Proficiency Level of LLMs for Content Generation
Ali Malik, Stephen Mayhew, Chris Piech, Klinton Bicknell