Prompt Optimization
Prompt optimization aims to automatically generate effective prompts for large language models (LLMs), reducing the need for manual, time-consuming prompt engineering. Current research focuses on algorithms that learn from both positive and negative examples, applying techniques such as contrastive learning and reinforcement learning, and often using generative adversarial networks or other neural networks to refine prompts iteratively. The field matters for improving LLM performance across diverse tasks, from code generation and question answering to image synthesis and anomaly detection, ultimately enhancing the usability and reliability of these models.
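The iterative-refinement idea described above can be sketched as a simple hill-climbing loop: propose an edited prompt, score it, and keep the edit only if the score improves. The sketch below is a minimal, self-contained illustration; `score_prompt` and `mutate` are hypothetical stand-ins (a real system would score prompts with an LLM on a validation set and generate edits with a learned model), not any specific method from the papers listed here.

```python
import random

# Toy keyword-overlap scorer; a real system would instead measure
# downstream task performance of the prompt with an LLM.
KEYWORDS = {"step", "concise", "examples", "reason"}

def score_prompt(prompt: str) -> float:
    words = set(prompt.lower().replace(".", " ").split())
    return len(words & KEYWORDS) / len(KEYWORDS)

def mutate(prompt: str, rng: random.Random) -> str:
    # Toy edit operator: append a random candidate phrase.
    # A learned rewriter or RL policy would propose edits instead.
    phrases = ["Think step by step.", "Be concise.",
               "Use examples.", "Reason carefully."]
    return prompt + " " + rng.choice(phrases)

def optimize(seed_prompt: str, iters: int = 20, seed: int = 0) -> str:
    rng = random.Random(seed)
    best, best_score = seed_prompt, score_prompt(seed_prompt)
    for _ in range(iters):
        candidate = mutate(best, rng)
        s = score_prompt(candidate)
        if s > best_score:  # greedily keep only improving edits
            best, best_score = candidate, s
    return best

optimized = optimize("Answer the question.")
```

Contrastive or RL-based methods replace the greedy acceptance rule with gradient signals derived from pairs of good and bad prompts, but the propose-score-select skeleton is the same.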
Papers
Alpaca against Vicuna: Using LLMs to Uncover Memorization of LLMs
Aly M. Kassem, Omar Mahmoud, Niloofar Mireshghallah, Hyunwoo Kim, Yulia Tsvetkov, Yejin Choi, Sherif Saad, Santu Rana
Localized Zeroth-Order Prompt Optimization
Wenyang Hu, Yao Shu, Zongmin Yu, Zhaoxuan Wu, Xiangqiang Lin, Zhongxiang Dai, See-Kiong Ng, Bryan Kian Hsiang Low
FIPO: Free-form Instruction-oriented Prompt Optimization with Preference Dataset and Modular Fine-tuning Schema
Junru Lu, Siyu An, Min Zhang, Yulan He, Di Yin, Xing Sun
Stochastic Approximation with Delayed Updates: Finite-Time Rates under Markovian Sampling
Arman Adibi, Nicolo Dal Fabbro, Luca Schenato, Sanjeev Kulkarni, H. Vincent Poor, George J. Pappas, Hamed Hassani, Aritra Mitra