Complex Prompt
Complex prompt engineering focuses on optimizing the input instructions given to large language models (LLMs) to elicit desired outputs, improving both performance and controllability. Current research explores various prompting techniques, including multi-step prompting, prefix-tuning, and reinforcement learning-based optimization, often applied to models such as the GPT and Llama series, to enhance LLM capabilities on diverse tasks such as text generation, image creation, and question answering. This field is significant because effective prompt engineering is crucial for unlocking the full potential of LLMs and mitigating their limitations, with impact on applications ranging from software development to scientific research.
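As a concrete illustration of one technique mentioned above, a few-shot prompt can be assembled by prepending labeled examples to the query before it is sent to an LLM. The sketch below is a minimal, model-agnostic assumption of how such a prompt might be constructed; the function name and template format are illustrative, not drawn from any of the papers listed here.

```python
# Minimal sketch of few-shot prompt construction for classification.
# The template ("Text:" / "Label:") is an illustrative assumption, not
# a format prescribed by any specific model or paper.

def build_few_shot_prompt(task_instruction, examples, query):
    """Assemble a prompt: instruction, labeled examples, then the query."""
    lines = [task_instruction, ""]
    for text, label in examples:
        lines.append(f"Text: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    # The query ends with a bare "Label:" so the model completes it.
    lines.append(f"Text: {query}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each text as positive or negative.",
    [("I loved this movie.", "positive"),
     ("The plot was a mess.", "negative")],
    "An instant classic.",
)
print(prompt)
```

The resulting string would then be passed as the input to whichever LLM API is in use; varying the number and order of examples is one of the levers that the prompt-complexity studies below investigate.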
Papers
Few-shot Unified Question Answering: Tuning Models or Prompts?
Srijan Bansal, Semih Yavuz, Bo Pang, Meghana Bhat, Yingbo Zhou
Navigating Prompt Complexity for Zero-Shot Classification: A Study of Large Language Models in Computational Social Science
Yida Mu, Ben P. Wu, William Thorne, Ambrose Robinson, Nikolaos Aletras, Carolina Scarton, Kalina Bontcheva, Xingyi Song
LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models
Long Lian, Boyi Li, Adam Yala, Trevor Darrell