Complex Prompt
Complex prompt engineering focuses on optimizing the input instructions given to large language models (LLMs) to elicit desired outputs, improving both performance and control over these models. Current research explores a range of prompting techniques, including multi-step prompting, prefix-tuning, and reinforcement learning-based optimization, often applied to models such as the GPT and Llama series, to enhance LLM capabilities across diverse tasks such as text generation, image creation, and question answering. Effective prompt engineering is crucial for unlocking the full potential of LLMs and mitigating their limitations, with impact on applications ranging from software development to scientific research.
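As a concrete illustration of the multi-step prompting idea mentioned above, the minimal sketch below chains two model calls: the first asks the model to decompose a question into steps, and the second asks it to answer using those steps. The `call_llm` function is a hypothetical placeholder for whatever LLM API is in use; it is not drawn from any of the papers listed here.

```python
from typing import Callable


def multi_step_prompt(question: str, call_llm: Callable[[str], str]) -> str:
    """Minimal multi-step prompting sketch: decompose, then answer.

    `call_llm` is a hypothetical stand-in for any chat/completion API
    (e.g., a thin wrapper around a GPT or Llama endpoint).
    """
    # Step 1: ask the model to break the problem into numbered sub-steps.
    plan = call_llm(
        "Break the following question into short, numbered reasoning steps.\n"
        f"Question: {question}"
    )
    # Step 2: feed the plan back in and ask for a final answer.
    answer = call_llm(
        "Answer the question by following these steps.\n"
        f"Steps:\n{plan}\n"
        f"Question: {question}\n"
        "Final answer:"
    )
    return answer


if __name__ == "__main__":
    # Trivial stub so the sketch runs without any external service.
    def echo_llm(prompt: str) -> str:
        return f"[model output for a prompt of {len(prompt)} characters]"

    print(multi_step_prompt("What is 17 * 24?", echo_llm))
```

The design choice here is simply that the intermediate output of one prompt becomes part of the next prompt; real systems layer retries, validation, and task-specific templates on top of this pattern.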
Papers
Sensitivity of Generative VLMs to Semantically and Lexically Altered Prompts
Sri Harsha Dumpala, Aman Jaiswal, Chandramouli Sastry, Evangelos Milios, Sageev Oore, Hassan Sajjad
When Not to Answer: Evaluating Prompts on GPT Models for Effective Abstention in Unanswerable Math Word Problems
Asir Saadat, Tasmia Binte Sogir, Md Taukir Azam Chowdhury, Syem Aziz