Entity Memory
Research on entity memory in language models examines how these models store and use information about specific entities (people, places, and things) during text processing and generation. One line of work measures the extent of memorization, including unintended memorization of sensitive training data and the resulting privacy implications; another develops methods to improve entity coherence and consistency in generated text, often by augmenting encoder-decoder architectures with dedicated entity memory components. Together, these efforts help mitigate the privacy risks associated with large language models and improve the factual accuracy and reliability of AI-generated content across applications.
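To make the second line of work concrete, the sketch below shows one way a dedicated entity memory component might be wired into a decoder: a bank of learnable per-entity slots that the model attends over at each generation step, with the read vector fused back into the hidden state. This is a minimal PyTorch sketch, not the design of any specific paper; the class name EntityMemory, the slot count, and the residual fusion are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntityMemory(nn.Module):
    """Hypothetical entity-memory block: a bank of learnable entity
    slots that a decoder attends over at each step."""
    def __init__(self, num_entities: int, d_model: int):
        super().__init__()
        # One learnable vector per tracked entity (person, place, thing).
        self.slots = nn.Embedding(num_entities, d_model)
        self.query_proj = nn.Linear(d_model, d_model)

    def forward(self, hidden: torch.Tensor, entity_ids: torch.Tensor) -> torch.Tensor:
        # hidden:     (batch, d_model)  decoder state at the current step
        # entity_ids: (batch, k)        entities mentioned so far in the text
        mem = self.slots(entity_ids)                      # (batch, k, d_model)
        q = self.query_proj(hidden).unsqueeze(1)          # (batch, 1, d_model)
        scores = (q * mem).sum(-1) / mem.size(-1) ** 0.5  # scaled dot-product
        attn = F.softmax(scores, dim=-1)                  # (batch, k)
        read = (attn.unsqueeze(-1) * mem).sum(1)          # (batch, d_model)
        # Fuse the memory read back into the decoder state (residual add).
        return hidden + read

# Usage: blend entity information into one decoding step.
mem = EntityMemory(num_entities=1000, d_model=256)
h = torch.randn(2, 256)                      # decoder states, batch of 2
ids = torch.tensor([[3, 17, 42], [5, 5, 9]])  # entity ids seen in each text
h_aug = mem(h, ids)                          # entity-aware state, same shape
```

Reading from persistent per-entity slots at every step is what lets the decoder keep later mentions of an entity consistent with earlier ones, which is the coherence property this line of research targets.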