Automatic Prompt Optimization
Automatic prompt optimization (APO) aims to automate the crafting of effective prompts for large language models (LLMs), improving their performance on a variety of tasks without requiring model retraining. Current research focuses on algorithms that iteratively refine prompts based on feedback, using techniques such as gradient-based optimization, reinforcement learning, and LLM-based self-reflection, often incorporating user-defined evaluation criteria or human feedback. APO matters because it can substantially reduce the time and expertise needed to deploy LLMs across diverse applications, from text summarization and image generation to clinical note creation and code search.
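The iterative refine-and-evaluate loop described above can be sketched in miniature. This is a minimal hill-climbing illustration, not any specific published APO method: the `evaluate` scorer and the `propose_edits` candidate generator are hypothetical stand-ins (a real system would score prompts by running the LLM on held-out examples, and would ask an LLM to critique and rewrite the current prompt).

```python
import random

def evaluate(prompt, examples):
    """Stub scorer standing in for task accuracy on held-out examples.
    Here it simply rewards prompts that name the task and stay short."""
    score = 0.0
    for _text, _label in examples:
        if "summarize" in prompt.lower():
            score += 1.0
    score -= 0.01 * len(prompt)  # brevity penalty
    return score

def propose_edits(prompt, rng):
    """Stub candidate generator. A real APO system would have an LLM
    reflect on failures and rewrite the prompt; here we draw mutations
    from a fixed pool to keep the sketch self-contained."""
    pool = [
        "Summarize the text below in one sentence.",
        "Summarize the key points concisely.",
        prompt + " Be concise.",
    ]
    return rng.sample(pool, k=2)

def optimize_prompt(seed_prompt, examples, steps=10, seed=0):
    """Hill-climbing APO loop: keep the best-scoring prompt found so far."""
    rng = random.Random(seed)
    best = seed_prompt
    best_score = evaluate(best, examples)
    for _ in range(steps):
        for candidate in propose_edits(best, rng):
            s = evaluate(candidate, examples)
            if s > best_score:
                best, best_score = candidate, s
    return best, best_score
```

Swapping in a real scorer (task accuracy over a validation set) and an LLM-driven rewrite step turns this skeleton into the feedback-based refinement loop the overview describes; beam search or reinforcement learning can replace the greedy update.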