Prompt Decomposition

Prompt decomposition involves breaking down complex prompts into smaller, more manageable components to improve the performance and efficiency of large language models (LLMs). Current research focuses on developing algorithms and model architectures that automatically decompose prompts, optimize their structure and content (e.g., incorporating counter-examples or leveraging contrastive learning), and select the most effective prompts for a given task. This approach enhances the accuracy and generalizability of LLMs across diverse applications, including text-to-image generation, question answering, and knowledge graph population, while also addressing challenges like bias mitigation and adversarial attacks.
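The decompose-then-aggregate workflow described above can be sketched in a few lines. This is a minimal illustration, not any specific system from the literature: the `call_llm` function is a hypothetical stand-in for a real model backend, and the rule-based `decompose` step (splitting a compound prompt at question boundaries) stands in for the learned decomposition methods current research develops.

```python
def decompose(prompt: str) -> list[str]:
    # Naive decomposition: split a compound prompt into one sub-prompt
    # per question. Research systems replace this heuristic with an
    # LLM-driven or learned decomposition step.
    marked = prompt.replace("?", "?|")
    return [p.strip() for p in marked.split("|") if p.strip()]

def call_llm(sub_prompt: str) -> str:
    # Hypothetical stub standing in for a real LLM call (assumption,
    # not a real API).
    return f"<answer to: {sub_prompt}>"

def answer_with_decomposition(prompt: str) -> str:
    # Solve each smaller sub-prompt independently, then aggregate
    # the sub-answers into a single response.
    sub_prompts = decompose(prompt)
    sub_answers = [call_llm(p) for p in sub_prompts]
    return "\n".join(sub_answers)

if __name__ == "__main__":
    complex_prompt = "Who wrote Hamlet? In what century was it written?"
    print(answer_with_decomposition(complex_prompt))
```

The key design point is that each sub-prompt is simpler than the original, so the model handles it more reliably; the aggregation step then recombines the partial answers.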

Papers