Parametric Knowledge
Parametric knowledge refers to the factual information implicitly encoded in the parameters of large language models (LLMs), in contrast to explicitly retrieved, non-parametric knowledge. Current research focuses on how LLMs balance these two knowledge sources during tasks such as question answering, probing the interplay between parametric and contextual information with techniques such as causal mediation analysis and analysis of attention patterns. This line of work is crucial for improving LLM reliability and accuracy, since it targets failure modes such as hallucination and knowledge conflicts, and it ultimately supports more robust and trustworthy AI systems across applications.
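The parametric-vs-contextual distinction can be made concrete with a closed-book / open-book probe: ask the same question with and without a counterfactual context and compare the answers. The sketch below is a minimal illustration of that protocol; the `answer` function is a hypothetical stand-in for a real LLM call (its lookup table plays the role of the model's parameters), not an actual model.

```python
from typing import Optional

# Hypothetical stand-in for knowledge encoded in model weights.
PARAMETRIC_FACTS = {"capital of France": "Paris"}

def answer(question: str, context: Optional[str] = None) -> str:
    """Toy 'model': prefers the provided context when one is given,
    otherwise falls back to its 'parametric' lookup table."""
    if context:
        # Naive extraction: take the final word of the context sentence.
        return context.rstrip(".").split()[-1]
    return PARAMETRIC_FACTS.get(question, "I don't know")

def knowledge_source(question: str, context: str) -> str:
    """Compare closed-book and open-book answers to see which
    knowledge source the model followed."""
    closed_book = answer(question)                  # parametric only
    open_book = answer(question, context=context)   # parametric + contextual
    if open_book == closed_book:
        return "consistent"
    return "contextual override"  # context won over the parameters

# A counterfactual context that contradicts the parametric fact:
print(knowledge_source("capital of France",
                       "The capital of France is Lyon."))
# -> contextual override
```

With a real LLM, `answer` would wrap an inference call, and aggregating `knowledge_source` over many counterfactual contexts estimates how often the model defers to context versus its parameters, which is the kind of measurement the papers below build on.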
Papers
Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge
Genglin Liu, Xingyao Wang, Lifan Yuan, Yangyi Chen, Hao Peng
R-Tuning: Instructing Large Language Models to Say 'I Don't Know'
Hanning Zhang, Shizhe Diao, Yong Lin, Yi R. Fung, Qing Lian, Xingyao Wang, Yangyi Chen, Heng Ji, Tong Zhang
Crafting In-context Examples according to LMs' Parametric Knowledge
Yoonsang Lee, Pranav Atreya, Xi Ye, Eunsol Choi