Limited Memorization
Limiting memorization in large language models (LLMs) and other generative models, such as diffusion models and vision-language models, is a critical research area: these models can unintentionally store and reproduce verbatim portions of their training data. Current work measures the extent of memorization across architectures, analyzes its impact on model performance and generalization, and explores mitigation strategies such as modifying the training objective or restricting updates to parameter-efficient fine-tuning. Understanding and controlling memorization is crucial for addressing privacy concerns, ensuring copyright compliance, and building more trustworthy and reliable AI systems.
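A standard way to quantify verbatim memorization is an extractability test: prompt the model with a prefix taken from a training document and check whether greedy decoding reproduces the true continuation. The sketch below assumes a Hugging Face causal LM; the checkpoint name and the 50/50 prefix/suffix split are illustrative choices, not fixed by any particular paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in checkpoint; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def is_memorized(text: str, prefix_tokens: int = 50, suffix_tokens: int = 50) -> bool:
    """Greedy-extractability check: does the model reproduce the true
    continuation of a training example given only its prefix?"""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    if ids.numel() < prefix_tokens + suffix_tokens:
        return False  # example too short to split into prefix + suffix
    prefix = ids[:prefix_tokens].unsqueeze(0)
    target = ids[prefix_tokens:prefix_tokens + suffix_tokens]
    with torch.no_grad():
        out = model.generate(
            prefix,
            max_new_tokens=suffix_tokens,
            do_sample=False,  # greedy decoding
            pad_token_id=tokenizer.eos_token_id,
        )
    # generate() returns prompt + continuation for decoder-only models
    continuation = out[0, prefix_tokens:prefix_tokens + suffix_tokens]
    return continuation.numel() == target.numel() and torch.equal(continuation, target)
```

Running this over a sample of training documents and reporting the fraction that pass is the usual way an "extent of memorization" number is produced; stricter or looser variants change the prefix length or allow approximate matches.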
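On the mitigation side, one objective-level idea from this literature is to exclude a subset of token positions from the next-token loss, so that no training sequence is ever fit verbatim end to end (the "goldfish loss" is a published instance of this). The PyTorch sketch below is a minimal, illustrative version using a random mask; the function name and drop probability are assumptions for illustration, not any paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def masked_lm_loss(logits: torch.Tensor, labels: torch.Tensor,
                   drop_prob: float = 0.25) -> torch.Tensor:
    """Next-token cross-entropy where a random subset of target positions
    is excluded from the loss, so no sequence can be learned verbatim.

    logits: (batch, seq_len, vocab)  labels: (batch, seq_len) long
    """
    # Standard causal-LM shift: predict token t+1 from positions <= t.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    per_token = F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        reduction="none",
    ).view(shift_labels.shape)
    # Keep each target position with probability (1 - drop_prob).
    keep = (torch.rand_like(per_token) > drop_prob).float()
    return (per_token * keep).sum() / keep.sum().clamp(min=1.0)
```

Published variants derive the mask deterministically from the local token context so the same positions are dropped on every epoch; the i.i.d. random mask here just keeps the sketch short.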