Knowledge Conflict
Knowledge conflict in large language models (LLMs) arises when the model's internal (parametric) knowledge disagrees with information supplied in the context, or when multiple external sources contradict one another. Current research focuses on detecting and resolving these conflicts with techniques such as contrastive decoding, adaptive decoding, and attention-mechanism adjustments, applied across model families including LLMs and vision-language models. Understanding and mitigating knowledge conflicts is crucial for improving the reliability and trustworthiness of these models, particularly in applications requiring factual accuracy and robust reasoning under uncertainty.
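As a rough illustration of the contrastive-decoding idea mentioned above, the sketch below shows one common formulation: the next-token logits computed with the context are contrasted against those computed without it, so that contextual evidence can outweigh the model's parametric prior when the two conflict. This is a minimal, self-contained sketch with toy numbers; the function names, the example logits, and the `alpha` weight are illustrative assumptions, not the method of any specific paper listed here.

```python
import numpy as np

def context_aware_logits(logits_with_ctx: np.ndarray,
                         logits_without_ctx: np.ndarray,
                         alpha: float = 1.0) -> np.ndarray:
    """Amplify the shift the context induces in the next-token distribution,
    so contextual evidence can override the model's parametric knowledge.
    (Illustrative formulation; alpha controls how strongly the contrast is applied.)"""
    return (1.0 + alpha) * logits_with_ctx - alpha * logits_without_ctx

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Toy example: token 0 is favored by parametric memory, token 1 by the context.
logits_no_ctx = np.array([3.0, 1.0, 0.5])   # model alone prefers token 0
logits_ctx    = np.array([2.0, 2.5, 0.5])   # with context, token 1 edges ahead

adjusted = context_aware_logits(logits_ctx, logits_no_ctx, alpha=1.0)
print("plain decoding:      ", softmax(logits_ctx).round(3))
print("contrastive decoding:", softmax(adjusted).round(3))
```

In this toy case the plain distribution only mildly prefers the context-supported token, while the contrasted distribution strongly favors it; larger `alpha` pushes the model further toward the context at the cost of trusting its parametric knowledge less.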
Papers
Getting Sick After Seeing a Doctor? Diagnosing and Mitigating Knowledge Conflicts in Event Temporal Reasoning
Tianqing Fang, Zhaowei Wang, Wenxuan Zhou, Hongming Zhang, Yangqiu Song, Muhao Chen
A Causal View of Entity Bias in (Large) Language Models
Fei Wang, Wenjie Mo, Yiwei Wang, Wenxuan Zhou, Muhao Chen