Memorized Knowledge
Memorized knowledge in large language models (LLMs) and other machine learning systems is an active research area concerned with how models store and retrieve information from their training data, and how that memorization affects performance and reliability. Current work investigates the mechanisms of memorization across model architectures, particularly transformers, and develops techniques to measure and mitigate its effects, such as prompt tuning and novel loss functions. This research is important for improving the accuracy, trustworthiness, and ethical deployment of LLMs, especially where memorized content raises data-privacy and bias concerns.
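Much of this literature quantifies memorization with a simple extraction test: prompt the model with a prefix drawn from its training set and check whether greedy decoding reproduces the true continuation verbatim. Below is a minimal sketch of that test using Hugging Face transformers; the model name, the prefix/suffix pair, and the is_memorized helper are illustrative placeholders, not taken from any specific paper.

```python
# Minimal sketch of an "extractable memorization" check: prompt a causal LM
# with a training-data prefix and test whether greedy decoding reproduces
# the true continuation verbatim. Model and strings are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM would do here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def is_memorized(prefix: str, true_suffix: str, max_new_tokens: int = 32) -> bool:
    """Return True if greedy decoding from `prefix` reproduces `true_suffix`."""
    inputs = tokenizer(prefix, return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,  # greedy decoding: memorized text sits on the argmax path
            pad_token_id=tokenizer.eos_token_id,
        )
    # Decode only the newly generated tokens, not the prompt.
    generated = tokenizer.decode(
        output_ids[0, inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    )
    return generated.strip().startswith(true_suffix.strip())

# Hypothetical usage: in a real audit the pair would come from the
# model's training corpus rather than a hand-picked phrase.
print(is_memorized("The quick brown fox", " jumps over the lazy dog"))
```

Greedy (rather than sampled) decoding is the usual choice for this test because memorized sequences tend to dominate the model's argmax path, making verbatim reproduction a conservative lower bound on memorization.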