Knowledge Conflict

Knowledge conflict in large language models (LLMs) arises when the model's internal (parametric) knowledge disagrees with information supplied in the context, or when multiple external sources contradict one another. Current research focuses on detecting and resolving these conflicts with techniques such as contrastive decoding, adaptive decoding, and attention-mechanism adjustments, applied to both text-only LLMs and vision-language models. Understanding and mitigating knowledge conflicts is crucial for improving the reliability and trustworthiness of these models, particularly in applications that require factual accuracy and robust reasoning under uncertainty.
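
To illustrate the contrastive-decoding idea mentioned above, the sketch below contrasts the model's next-token distribution conditioned on a retrieved context with its distribution given the question alone, boosting tokens that the context supports over the model's parametric prior. This is a minimal sketch under stated assumptions, not any specific paper's method: the model name (`gpt2`), the prompts, and the contrast weight `alpha` are illustrative placeholders.

```python
# Minimal sketch of contrastive (context-aware) decoding for knowledge conflict.
# The next-token distribution conditioned on the retrieved context is contrasted
# with the distribution from the question alone, so tokens supported by the
# context are amplified relative to the model's internal (parametric) knowledge.
# Model name, prompts, and alpha are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

context = "Retrieved passage: The Eiffel Tower was completed in 1889."
question = "Question: When was the Eiffel Tower completed? Answer:"
alpha = 1.0           # contrast strength; larger values trust the context more
max_new_tokens = 10

with_ctx = tokenizer(context + "\n" + question, return_tensors="pt").input_ids
without_ctx = tokenizer(question, return_tensors="pt").input_ids

generated = []
with torch.no_grad():
    for _ in range(max_new_tokens):
        logits_ctx = model(with_ctx).logits[:, -1, :]       # p(y | context, question)
        logits_plain = model(without_ctx).logits[:, -1, :]  # p(y | question): parametric prior
        # Contrastive score: reward tokens the context adds over the prior.
        scores = (1 + alpha) * logits_ctx - alpha * logits_plain
        next_token = torch.argmax(scores, dim=-1, keepdim=True)
        generated.append(next_token.item())
        with_ctx = torch.cat([with_ctx, next_token], dim=-1)
        without_ctx = torch.cat([without_ctx, next_token], dim=-1)

print(tokenizer.decode(generated))
```

Setting `alpha = 0` recovers ordinary greedy decoding; increasing it shifts the resolution of a conflict toward the contextual evidence, which is the core trade-off these decoding-time methods tune.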

Papers