Parametric Knowledge

Parametric knowledge refers to the factual information implicitly encoded in the parameters of large language models (LLMs), in contrast to explicitly retrieved, non-parametric knowledge. Current research focuses on understanding how LLMs balance these two knowledge sources in tasks such as question answering, probing the interplay between parametric and contextual information with techniques such as causal mediation analysis and analysis of attention patterns. This work is central to improving LLM reliability and accuracy, since it targets issues such as hallucinations and knowledge conflicts, ultimately supporting more robust and trustworthy AI systems across applications.
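
To make the parametric vs. non-parametric distinction concrete, below is a minimal sketch (not taken from any of the surveyed papers) that contrasts closed-book answering, where the model must rely on knowledge stored in its weights, with open-book answering, where retrieved context is prepended to the prompt. The model name, prompts, and context string are illustrative assumptions; the same pattern is a simple way to probe parametric/contextual knowledge conflicts by supplying context that contradicts the weights.

```python
# Illustrative sketch: closed-book (parametric) vs. open-book (contextual) answering.
# Model name and prompts are assumptions chosen for the example, not from the source.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any small causal LM works for this illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def answer(prompt: str, max_new_tokens: int = 10) -> str:
    """Greedy-decode a short continuation and return only the new tokens."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

question = "Q: What is the capital of France?\nA:"

# Parametric (closed-book): the answer must come from the model's weights.
print("closed-book:", answer(question))

# Non-parametric (open-book): the same question with retrieved context prepended.
# If the context contradicts the weights, the two answers can diverge,
# exposing a knowledge conflict.
context = "Context: The capital of France is Paris.\n"
print("open-book: ", answer(context + question))
```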

Papers