Complex Prompt
Complex prompt engineering focuses on optimizing the input instructions given to large language models (LLMs) to elicit desired outputs, improving both performance and controllability. Current research explores a range of prompting techniques, including multi-step prompting, prefix-tuning, and reinforcement learning-based optimization, often applied to models such as the GPT and Llama series, to enhance LLM capabilities across diverse tasks such as text generation, image creation, and question answering. This field is significant because effective prompt engineering is crucial for unlocking the full potential of LLMs and mitigating their limitations, with impact on applications ranging from software development to scientific research and beyond.
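To make the multi-step prompting idea mentioned above concrete, the following is a minimal sketch of a two-step prompt chain. The `call_llm` function is a hypothetical stand-in for a real model call (e.g. an API request) and is stubbed here so the example runs offline; the decomposition-then-answer structure is the technique being illustrated, not any specific library's API.

```python
# Minimal sketch of multi-step (chained) prompting.
# NOTE: `call_llm` is a stub standing in for a real LLM call.

def call_llm(prompt: str) -> str:
    # Placeholder for a real model invocation (API request, local model, etc.).
    return f"[model response to: {prompt[:40]}...]"

def multi_step_prompt(question: str) -> str:
    # Step 1: ask the model to decompose the task into sub-problems.
    plan = call_llm(f"Break this question into steps:\n{question}")
    # Step 2: feed the plan back in and request a final answer grounded in it.
    answer = call_llm(
        f"Question: {question}\nPlan: {plan}\n"
        "Answer the question by following the plan step by step."
    )
    return answer

result = multi_step_prompt("Why does prompt order affect LLM output?")
print(result)
```

Chaining prompts this way lets intermediate model output (the plan) steer the final generation, which is the core idea behind many multi-step prompting schemes.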
Papers
PromptAD: Learning Prompts with only Normal Samples for Few-Shot Anomaly Detection
Xiaofan Li, Zhizhong Zhang, Xin Tan, Chengwei Chen, Yanyun Qu, Yuan Xie, Lizhuang Ma
Plug and Play with Prompts: A Prompt Tuning Approach for Controlling Text Generation
Rohan Deepak Ajwani, Zining Zhu, Jonathan Rose, Frank Rudzicz