Contrastive Prompting

Contrastive prompting is a technique that improves large language model (LLM) performance by training or guiding the model with pairs of contrasting prompts: one correct and one incorrect, or one representing a desired outcome and one representing an undesired outcome. Current research applies the approach to several aspects of LLMs, including reasoning ability, multi-objective alignment, backdoor detection, and continual learning, often by combining contrastive learning frameworks with prompt-tuning methods. Because it operates at the prompt level, the technique offers an efficient way to improve LLM capabilities across diverse tasks, reducing the need for extensive retraining or manual prompt engineering and yielding more robust, adaptable systems.
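
To make the idea concrete, the following is a minimal Python sketch of the prompt-level variant, in which a correct exemplar is paired with an incorrect one before the real question is posed. The names build_contrastive_prompt and query_llm are illustrative assumptions, not drawn from any specific paper; query_llm stands in for whatever chat-completion call you actually use.

def build_contrastive_prompt(question: str,
                             good_example: str,
                             bad_example: str) -> str:
    """Pair a desired and an undesired exemplar so the model can
    contrast them before answering the real question."""
    return (
        "Below are two example answers to a similar question.\n\n"
        f"Correct answer (follow this style of reasoning):\n{good_example}\n\n"
        f"Incorrect answer (avoid this mistake):\n{bad_example}\n\n"
        f"Question: {question}\n"
        "Answer step by step, avoiding the error shown above."
    )

def query_llm(prompt: str) -> str:
    # Hypothetical placeholder: replace with a real completion call
    # (e.g. an OpenAI client or a local model).
    raise NotImplementedError

if __name__ == "__main__":
    prompt = build_contrastive_prompt(
        question="A shirt costs $20 after a 20% discount. "
                 "What was the original price?",
        good_example="Original * 0.8 = 20, so Original = 20 / 0.8 = $25.",
        bad_example="20% of $20 is $4, so the original price was $24.",
    )
    print(prompt)  # inspect the assembled contrastive prompt

Keeping the two exemplars adjacent and explicitly labeled is what distinguishes this from ordinary few-shot prompting: the model is asked to contrast the desired and undesired answers, not merely imitate a correct one.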

Papers