Prompt Ensembling
Prompt ensembling combines the outputs of multiple prompts to improve the accuracy, consistency, and robustness of large language models (LLMs) across tasks. Current research focuses on optimizing prompt design, including prompt structure and sequencing strategies, and on ensemble methods that reduce a model's sensitivity to any single prompt, often using models such as GPT-4. The approach shows promise for LLM-based evaluation, zero-shot classification with text-image models, and applications such as semantic parsing and legal argument reasoning.
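As a concrete illustration, the sketch below majority-votes an LLM's answers across several paraphrases of the same classification prompt. The `query_llm` stub, the sentiment task, and the templates are illustrative assumptions, not any particular paper's setup; in practice the stub would wrap a real model API call.

```python
from collections import Counter

# Several phrasings of the same task; varying the template
# mitigates the model's sensitivity to any single prompt.
TEMPLATES = [
    "Classify the sentiment of this review as positive or negative: {text}",
    "Is the following review positive or negative? Answer in one word. {text}",
    "Review: {text}\nSentiment (positive/negative):",
]

def query_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real LLM API call
    # (e.g., a GPT-4 chat-completion wrapper).
    return "positive"

def ensemble_classify(text: str) -> str:
    """Query the model once per prompt template and majority-vote the answers."""
    answers = [query_llm(t.format(text=text)).strip().lower() for t in TEMPLATES]
    return Counter(answers).most_common(1)[0][0]

if __name__ == "__main__":
    print(ensemble_classify("The battery lasts all day and setup was painless."))
```

For text-image models, the analogous trick averages the text embeddings of many caption templates (e.g., "a photo of a {label}") before computing image-text similarity, rather than voting over discrete answers.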