Prompt Optimization
Prompt optimization aims to automatically generate effective prompts for large language models (LLMs), reducing the need for manual, time-consuming prompt engineering. Current research focuses on algorithms that learn from both positive and negative examples, using techniques such as contrastive learning and reinforcement learning, and often employing auxiliary neural networks (including generative adversarial networks) to refine prompts iteratively. This field is important for improving LLM performance across diverse tasks, from code generation and question answering to image synthesis and anomaly detection, ultimately enhancing the usability and reliability of these models.
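To make the iterative refinement idea concrete, here is a minimal sketch of a search-style prompt optimization loop: candidate prompts are proposed by mutating the current best prompt, scored on a small development set, and kept only if they improve the score. All names here (call_llm, mutate, DEV_SET, the toy scoring rule) are illustrative assumptions, not the method of any specific paper listed below; real systems typically use an LLM to propose edits and a task metric to score them.

import random

# Hypothetical dev set: (task input, expected keyword) pairs -- purely illustrative.
DEV_SET = [
    ("Translate 'bonjour' to English.", "hello"),
    ("Translate 'gracias' to English.", "thank"),
]

def call_llm(prompt: str, task_input: str) -> str:
    """Stand-in for a real LLM call; replace with your model's API."""
    # Echo the input so the script runs end to end without external dependencies.
    return task_input.lower()

def score(prompt: str) -> float:
    """Fraction of dev examples whose output contains the expected keyword."""
    hits = sum(1 for x, kw in DEV_SET if kw in call_llm(prompt, x))
    return hits / len(DEV_SET)

def mutate(prompt: str) -> str:
    """Toy rewrite step; real optimizers ask an LLM to propose edits or use RL."""
    suffixes = [" Answer concisely.", " Think step by step.", " Use one word."]
    return prompt + random.choice(suffixes)

def optimize(seed_prompt: str, rounds: int = 5, candidates: int = 4) -> str:
    """Greedy hill-climbing over prompt candidates, keeping the best scorer."""
    best_prompt, best_score = seed_prompt, score(seed_prompt)
    for _ in range(rounds):
        for cand in (mutate(best_prompt) for _ in range(candidates)):
            s = score(cand)
            if s > best_score:  # keep a candidate only if it improves dev accuracy
                best_prompt, best_score = cand, s
    return best_prompt

if __name__ == "__main__":
    print(optimize("Translate the given phrase to English."))

Swapping the greedy keep-if-better rule for a learned policy, or the mutation step for contrastive selection over positive and negative prompt examples, recovers the reinforcement-learning and contrastive variants described above.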
Papers
MultiPrompter: Cooperative Prompt Optimization with Multi-Agent Reinforcement Learning
Dong-Ki Kim, Sungryull Sohn, Lajanugen Logeswaran, Dongsub Shim, Honglak Lee
PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization
Xinyuan Wang, Chenxi Li, Zhen Wang, Fan Bai, Haotian Luo, Jiayou Zhang, Nebojsa Jojic, Eric P. Xing, Zhiting Hu