Prompt Optimization
Prompt optimization aims to automatically generate effective prompts for large language models (LLMs), reducing the need for manual, time-consuming prompt engineering. Current research focuses on algorithms that learn from both positive and negative examples, using techniques such as contrastive learning and reinforcement learning, sometimes incorporating generative adversarial networks or other auxiliary neural models, to refine prompts iteratively: candidate prompts are proposed, scored against held-out examples, and the best are kept to seed the next round. The field is crucial for improving LLM performance across diverse tasks, from code generation and question answering to image synthesis and anomaly detection, ultimately enhancing the usability and reliability of these models.
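To make that iterative loop concrete, below is a minimal sketch of the generic propose-score-select pattern that many of these methods share. It is an illustration under stated assumptions, not the algorithm of any paper listed here: call_llm is a stub standing in for a real model client, mutate stands in for an LLM-based prompt rewriter, and the two-example dev set is a toy placeholder.

```python
import random

# Hypothetical stand-ins: replace call_llm with a real model client and
# mutate with an LLM-driven rewrite step in an actual system.

REWRITE_HINTS = [
    "Make the prompt more specific.",
    "Add a step-by-step reasoning cue.",
    "Shorten the prompt without losing constraints.",
]

def call_llm(prompt: str) -> str:
    """Stub for a real LLM call; echoes the input so the loop runs standalone."""
    return prompt.lower()

def mutate(prompt: str, rng: random.Random) -> str:
    """Propose a candidate prompt; a real rewriter would ask an LLM to
    revise the prompt, often conditioned on observed failure cases."""
    return prompt + " " + rng.choice(REWRITE_HINTS)

def score(prompt: str, dev_set: list) -> float:
    """Fraction of dev examples whose expected answer appears in the model
    output; real setups substitute task-specific metrics."""
    hits = 0
    for question, expected in dev_set:
        output = call_llm(prompt + "\n" + question)
        hits += int(expected.lower() in output)
    return hits / len(dev_set)

def optimize(seed_prompt: str, dev_set: list, rounds: int = 5, beam: int = 4) -> str:
    """Greedy hill climb over prompt space: propose `beam` candidates per
    round, evaluate each on the dev set, and keep the best scorer."""
    rng = random.Random(0)
    best, best_score = seed_prompt, score(seed_prompt, dev_set)
    for _ in range(rounds):
        candidates = [mutate(best, rng) for _ in range(beam)]
        for candidate in candidates:
            s = score(candidate, dev_set)
            if s > best_score:
                best, best_score = candidate, s
    return best

if __name__ == "__main__":
    dev = [("What is 2 + 2?", "4"), ("Capital of France?", "Paris")]
    print(optimize("Answer the question.", dev))
```

Real systems replace the greedy hill climb with beam search, reinforcement learning, or reflection over failure cases, but the evaluate-and-select skeleton is broadly the same.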
Papers
TIPO: Text to Image with Text Presampling for Prompt Optimization
Shih-Ying Yeh, Sang-Hyun Park, Giyeong Oh, Min Song, Youngjae Yu
Efficient and Accurate Prompt Optimization: the Benefit of Memory in Exemplar-Guided Reflection
Cilin Yan, Jingyun Wang, Lin Zhang, Ruihui Zhao, Xiaopu Wu, Kai Xiong, Qingsong Liu, Guoliang Kang, Yangyang Kang
AMPO: Automatic Multi-Branched Prompt Optimization
Sheng Yang, Yurong Wu, Yan Gao, Zineng Zhou, Bin Benjamin Zhu, Xiaodi Sun, Jian-Guang Lou, Zhiming Ding, Anbang Hu, Yuan Fang, Yunsong Li, Junyan Chen, Linjun Yang
StraGo: Harnessing Strategic Guidance for Prompt Optimization
Yurong Wu, Yan Gao, Bin Benjamin Zhu, Zineng Zhou, Xiaodi Sun, Sheng Yang, Jian-Guang Lou, Zhiming Ding, Linjun Yang