Prompt Optimization
Prompt optimization aims to automatically generate effective prompts for large language models (LLMs), reducing the need for manual, time-consuming prompt engineering. Current research focuses on algorithms that learn from both positive and negative examples, using techniques such as contrastive learning and reinforcement learning, and often incorporating generative adversarial networks or other auxiliary neural networks to refine prompts iteratively. This work matters because it improves LLM performance across diverse tasks, from code generation and question answering to image synthesis and anomaly detection, ultimately enhancing the usability and reliability of these models.
Papers
Eliciting Causal Abilities in Large Language Models for Reasoning Tasks
Yajing Wang, Zongwei Luo, Jingzhe Wang, Zhanke Zhou, Yongqiang Chen, Bo Han
A Comparative Study of DSPy Teleprompter Algorithms for Aligning Large Language Models Evaluation Metrics to Human Evaluation
Bhaskarjit Sarmah, Kriti Dutta, Anna Grigoryan, Sachin Tiwari, Stefano Pasquali, Dhagash Mehta
Improving LLM Group Fairness on Tabular Data via In-Context Learning
Valeriia Cherepanova, Chia-Jung Lee, Nil-Jana Akpinar, Riccardo Fogliato, Martin Andres Bertran, Michael Kearns, James Zou
Evolutionary Pre-Prompt Optimization for Mathematical Reasoning
Mathurin Videau, Alessandro Leite, Marc Schoenauer, Olivier Teytaud
Safeguarding Text-to-Image Generation via Inference-Time Prompt-Noise Optimization
Jiangweizhi Peng, Zhiwei Tang, Gaowen Liu, Charles Fleming, Mingyi Hong