Contrastive Decoding

Contrastive decoding is a technique for improving the quality and reliability of text and code generation from large language models (LLMs) and vision-language models (VLMs). Rather than sampling directly from a single model's distribution, it scores candidate tokens by the difference between the predictions of a strong "expert" model and those of a weaker contrast source, such as a smaller "amateur" model, a perturbed input, or an intermediate layer, favoring tokens the expert rates far more highly than the contrast does. Current research focuses on mitigating hallucinations (factually incorrect generations), sycophancy (undue influence from leading prompts), and biases in model outputs, often leveraging multiple model outputs, augmented inputs, or uncertainty estimates to refine the decoding step. The approach holds significant promise for enhancing the trustworthiness and robustness of LLMs and VLMs across applications such as question answering, code generation, and image captioning.
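To make the expert/amateur contrast concrete, here is a minimal greedy-decoding sketch in the style of the original contrastive decoding formulation (score each token by log p_expert minus log p_amateur, restricted to tokens the expert itself finds plausible). The model pair (`gpt2-large` / `gpt2`), the `alpha` cutoff, and the function name `contrastive_decode` are illustrative assumptions, not a reference implementation; any strong/weak pair sharing a tokenizer would do.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model pair: a stronger "expert" and a weaker "amateur"
# that share the same tokenizer/vocabulary.
expert_name = "gpt2-large"
amateur_name = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(expert_name)
expert = AutoModelForCausalLM.from_pretrained(expert_name).eval()
amateur = AutoModelForCausalLM.from_pretrained(amateur_name).eval()

@torch.no_grad()
def contrastive_decode(prompt: str, max_new_tokens: int = 40, alpha: float = 0.1):
    """Greedy contrastive decoding sketch:
    score(x) = log p_expert(x) - log p_amateur(x),
    restricted to tokens whose expert probability is at least
    alpha * (max expert probability) -- the plausibility constraint."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        # Next-token log-probabilities from both models.
        log_p_exp = expert(ids).logits[0, -1].log_softmax(-1)
        log_p_ama = amateur(ids).logits[0, -1].log_softmax(-1)
        # Plausibility constraint: discard tokens the expert deems unlikely,
        # which prevents the contrast from promoting implausible tokens.
        cutoff = log_p_exp.max() + torch.log(torch.tensor(alpha))
        scores = log_p_exp - log_p_ama
        scores[log_p_exp < cutoff] = float("-inf")
        next_id = scores.argmax()
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)

print(contrastive_decode("The capital of France is"))
```

Without the plausibility cutoff, the score difference alone would reward tokens that both models consider near-impossible but the amateur penalizes slightly more, so the constraint is what keeps the contrast from degrading fluency.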

Papers