Prompt Decomposition
Prompt decomposition breaks a complex prompt into smaller, more manageable components so that large language models (LLMs) can handle each part more accurately and efficiently. Current research focuses on algorithms and model architectures that decompose prompts automatically, optimize their structure and content (e.g., by incorporating counter-examples or leveraging contrastive learning), and select the most effective sub-prompts for a given task. This approach improves the accuracy and generalizability of LLMs across diverse applications, including text-to-image generation, question answering, and knowledge graph population, while also helping to address challenges such as bias mitigation and adversarial attacks.
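As a concrete illustration, the decompose-then-solve pattern described above can be sketched in a few lines of Python. This is a minimal, hypothetical sketch: the rule-based `decompose` splitter and the `llm` callable are stand-ins for a real decomposition model and a real LLM API, which the papers below replace with learned components.

```python
from typing import Callable, List

def decompose(prompt: str) -> List[str]:
    """Naive rule-based decomposition: split a compound prompt on
    'and' / 'and then' into an ordered list of sub-prompts.
    (A stand-in for a learned decomposer.)"""
    parts = prompt.replace(" and then ", " and ").split(" and ")
    return [p.strip() for p in parts if p.strip()]

def run_decomposed(prompt: str, llm: Callable[[str], str]) -> List[str]:
    """Answer each sub-prompt separately, threading earlier Q/A pairs
    back in as context so later steps can build on earlier answers."""
    answers: List[str] = []
    context = ""
    for sub in decompose(prompt):
        full_prompt = (context + "\n" if context else "") + sub
        answer = llm(full_prompt)  # one focused call per sub-task
        answers.append(answer)
        context += f"Q: {sub}\nA: {answer}\n"
    return answers

# Deterministic stub in place of a real LLM call, for demonstration only.
stub_llm = lambda p: f"answer({p.splitlines()[-1]})"

print(run_decomposed("summarize the report and list three risks", stub_llm))
```

Each sub-prompt is short and single-purpose, which is the core intuition behind prompt decomposition: the model answers several easy questions instead of one hard compound one, and intermediate answers can be inspected or corrected before the next step runs.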
Papers
June 15, 2024
June 4, 2024
May 13, 2024
May 6, 2024
April 25, 2024
April 3, 2024
March 21, 2024
February 25, 2024
November 28, 2023
November 25, 2023
November 20, 2023
November 7, 2023
October 21, 2023
September 21, 2023
May 4, 2023
April 10, 2023
February 20, 2023
February 17, 2023