Complex Prompt
Complex prompt engineering focuses on optimizing the input instructions given to large language models (LLMs) to elicit desired outputs, improving both performance and control. Current research explores a range of prompting techniques, including multi-step prompting, prefix-tuning, and reinforcement learning-based optimization, often applied to models such as the GPT and Llama series, to enhance LLM capabilities on diverse tasks such as text generation, image creation, and question answering. Effective prompt engineering is crucial for unlocking the full potential of LLMs and mitigating their limitations, with impact on applications ranging from software development to scientific research.
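As a concrete illustration of one technique mentioned above, the sketch below assembles a few-shot prompt by interleaving worked examples with a new query. The template, exemplars, and function name are illustrative assumptions, not taken from any of the listed papers or a specific model API:

```python
# Minimal sketch of few-shot prompt construction (hypothetical template).
# The exemplars and instruction text are invented for illustration only.

def build_few_shot_prompt(exemplars, question):
    """Assemble a prompt: task instruction, worked examples, then the new query."""
    parts = ["Answer the question based on the context.\n"]
    for context, q, a in exemplars:
        # Each exemplar demonstrates the desired input/output format.
        parts.append(f"Context: {context}\nQ: {q}\nA: {a}\n")
    # The prompt ends mid-pattern so the model completes the final answer.
    parts.append(f"Q: {question}\nA:")
    return "\n".join(parts)

exemplars = [
    ("Sales rose from 10 to 15 units.", "Did sales increase?", "Yes"),
    ("Revenue fell by 5 percent.", "Did revenue grow?", "No"),
]
prompt = build_few_shot_prompt(exemplars, "Did profit double?")
print(prompt)
```

The resulting string would typically be sent to an LLM completion endpoint; the in-context examples steer the model toward the demonstrated answer format without any fine-tuning.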
Papers
SeGA: Preference-Aware Self-Contrastive Learning with Prompts for Anomalous User Detection on Twitter
Ying-Ying Chang, Wei-Yao Wang, Wen-Chih Peng
Do LLMs Work on Charts? Designing Few-Shot Prompts for Chart Question Answering and Summarization
Xuan Long Do, Mohammad Hassanpour, Ahmed Masry, Parsa Kavehzadeh, Enamul Hoque, Shafiq Joty
EAFP-Med: An Efficient Adaptive Feature Processing Module Based on Prompts for Medical Image Detection
Xiang Li, Long Lan, Husam Lahza, Shaowu Yang, Shuihua Wang, Wenjing Yang, Hengzhu Liu, Yudong Zhang
A Comparative and Experimental Study on Automatic Question Answering Systems and its Robustness against Word Jumbling
Shashidhar Reddy Javaji, Haoran Hu, Sai Sameer Vennam, Vijaya Gajanan Buddhavarapu
When "A Helpful Assistant" Is Not Really Helpful: Personas in System Prompts Do Not Improve Performances of Large Language Models
Mingqian Zheng, Jiaxin Pei, Lajanugen Logeswaran, Moontae Lee, David Jurgens
Automatic Engineering of Long Prompts
Cho-Jui Hsieh, Si Si, Felix X. Yu, Inderjit S. Dhillon
Strings from the Library of Babel: Random Sampling as a Strong Baseline for Prompt Optimisation
Yao Lu, Jiayi Wang, Raphael Tang, Sebastian Riedel, Pontus Stenetorp