Factual Knowledge

Factual knowledge in large language models (LLMs) is a growing research area that examines how these models acquire, store, and use factual information, and how their accuracy and reliability can be improved. Current research probes the limits of LLMs in learning and retaining facts, in particular their tendency to rely on word co-occurrence statistics rather than genuine factual associations, and explores mitigations such as alternate preference optimization and knowledge editing. This work is central to building more trustworthy LLMs, with implications for applications ranging from question answering and information retrieval to knowledge-based systems.
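
To make the co-occurrence bias concrete, the sketch below compares a model's next-token probability for a true fact against a frequently co-occurring but incorrect alternative. This is a minimal illustrative probe, not a method from any specific paper: the model checkpoint (gpt2), the prompt, the correct object (Ottawa), and the distractor (Toronto, which co-occurs with "Canada" far more often in web text) are all assumptions chosen for illustration.

```python
# Minimal co-occurrence bias probe, assuming a HuggingFace causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative choice; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "The capital of Canada is"

def first_token_logprob(prompt: str, continuation: str) -> float:
    """Log-probability of the first token of `continuation` given `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    # Leading space so the BPE tokenizer segments the word as a continuation.
    cont_ids = tokenizer(" " + continuation).input_ids
    with torch.no_grad():
        logits = model(prompt_ids).logits[0, -1]  # logits at the last position
    return torch.log_softmax(logits, dim=-1)[cont_ids[0]].item()

fact = first_token_logprob(prompt, "Ottawa")        # the true fact
distractor = first_token_logprob(prompt, "Toronto")  # co-occurs heavily, wrong

print(f"log P(Ottawa)  = {fact:.3f}")
print(f"log P(Toronto) = {distractor:.3f}")
if distractor > fact:
    print("Model prefers the co-occurrence distractor over the fact.")
```

If the distractor scores higher, the completion is being driven by corpus statistics rather than the underlying fact, which is precisely the failure mode described above.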

Papers