Non-Parametric Memory

Non-parametric memory in large language models (LLMs) refers to external knowledge sources that augment the model's parametric knowledge (stored in its weights), giving it access to a broader range of information. Current research focuses on Retrieval-Augmented Generation (RAG) models, which combine neural retrieval with text generation, and examines how these models balance parametric and non-parametric knowledge, often revealing a strong bias towards the retrieved context. This area is significant because it addresses LLMs' limitations in handling specialized or infrequent knowledge, leading to more robust and accurate systems for applications such as question answering and robotics. Improving the efficiency and effectiveness of non-parametric memory integration is a key goal, with ongoing work exploring techniques like selective memorization and improved similarity search.
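
As a rough illustration, here is a minimal sketch of a non-parametric memory feeding a RAG-style prompt. All names (`NonParametricMemory`, `embed`, `build_prompt`) are hypothetical, and the toy bag-of-words hashing embedding stands in for a learned dense retriever; this is a sketch of the general idea, not any specific paper's implementation.

```python
# Minimal sketch of non-parametric memory for retrieval-augmented generation.
# Hypothetical names throughout; a real system would use a learned encoder
# (e.g., a dense retriever) and an approximate nearest-neighbor index instead
# of this toy bag-of-words embedding and exact cosine scan.
from collections import Counter

import numpy as np


def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy bag-of-words hashing embedding (stand-in for a learned encoder).

    Note: Python's str hash is salted per process, so vectors differ across
    runs but stay consistent within one run, which is enough for this demo.
    """
    vec = np.zeros(dim)
    for token, count in Counter(text.lower().split()).items():
        vec[hash(token) % dim] += count
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec


class NonParametricMemory:
    """External memory: documents live outside the model's weights and are
    fetched at query time via similarity search."""

    def __init__(self) -> None:
        self.texts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, text: str) -> None:
        self.texts.append(text)
        self.vectors.append(embed(text))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Exact cosine similarity over all stored vectors; large stores
        # would use an approximate index for efficiency.
        q = embed(query)
        scores = np.stack(self.vectors) @ q
        top = np.argsort(scores)[::-1][:k]
        return [self.texts[i] for i in top]


def build_prompt(query: str, memory: NonParametricMemory) -> str:
    """RAG-style prompt: retrieved passages condition the generator, letting
    the answer draw on non-parametric rather than parametric knowledge."""
    passages = memory.retrieve(query)
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


if __name__ == "__main__":
    mem = NonParametricMemory()
    mem.add("The Eiffel Tower is located in Paris, France.")
    mem.add("Mount Fuji is the tallest mountain in Japan.")
    mem.add("RAG combines a neural retriever with a seq2seq generator.")
    print(build_prompt("Where is the Eiffel Tower?", mem))
```

The exact cosine scan here is the simplest instance of the similarity search mentioned above; much of the ongoing efficiency work concerns replacing such exact scans with approximate nearest-neighbor indexes that scale to millions of passages.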

Papers