Prompt Ensembling

Prompt ensembling is a technique that combines the outputs of multiple prompts to improve the accuracy, consistency, and robustness of large language models (LLMs) across tasks. Current research focuses on optimizing prompt design, including prompt structures and sequencing strategies, and on ensemble methods that reduce sensitivity to any single prompt's wording, often leveraging models like GPT-4. The approach shows promise for improving LLM-based evaluation, enabling more effective zero-shot classification with vision-language (text-image) models, and advancing applications such as semantic parsing and legal argument reasoning.
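
As a minimal sketch of the common pattern, assuming a hypothetical `query_llm` callable that wraps whatever completion API is in use: the same input is classified under several paraphrased prompts, and the per-prompt answers are combined by majority vote, which reduces sensitivity to any single prompt's wording. The prompt texts below are illustrative, not taken from a specific paper.

```python
from collections import Counter
from typing import Callable, List

def ensemble_classify(
    text: str,
    prompt_templates: List[str],
    query_llm: Callable[[str], str],  # hypothetical: maps a prompt string to the model's reply
) -> str:
    """Classify `text` under each prompt template, then majority-vote the answers."""
    votes = [query_llm(t.format(text=text)).strip().lower() for t in prompt_templates]
    winner, _ = Counter(votes).most_common(1)[0]
    return winner

# Three paraphrases of the same zero-shot sentiment prompt (illustrative wordings).
PROMPTS = [
    "Classify the sentiment of this review as positive or negative: {text}",
    "Is the following review positive or negative? Answer with one word.\n{text}",
    "Review: {text}\nSentiment (positive or negative):",
]

# Usage with any LLM client:
#   label = ensemble_classify(review, PROMPTS, query_llm=my_model_call)
```

When the API exposes per-class probabilities or log-likelihoods, averaging those scores across prompts (soft voting) is a common alternative to the hard vote shown here.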

Papers