Limited Memorization
Memorization in large language models (LLMs) and other generative AI systems, such as diffusion models and vision-language models, is a critical research area focused on how these models unintentionally store and reproduce training data, and on how that behavior can be limited. Current research measures the extent of memorization across architectures, analyzes its impact on model performance and generalization, and explores mitigation strategies, including modified training objectives and parameter-efficient fine-tuning. Understanding and controlling memorization is crucial for addressing privacy concerns, ensuring copyright compliance, and building more trustworthy and reliable AI systems.
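A standard way to quantify the extent of memorization is an extractability test, in the spirit of the prefix-continuation probes used throughout this literature: prompt the model with a prefix taken from a training document and check whether greedy decoding reproduces the document's continuation verbatim. The sketch below is a minimal, hypothetical version of such a probe using the Hugging Face transformers API; the function name, the 50-token prefix/suffix lengths, and the choice of gpt2 are illustrative assumptions, not the method of any particular paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def verbatim_memorization_rate(model, tokenizer, samples,
                               prefix_len=50, suffix_len=50):
    """Fraction of training samples whose next `suffix_len` tokens the
    model reproduces exactly when prompted with the preceding
    `prefix_len` tokens (greedy decoding). Illustrative sketch only."""
    hits, tested = 0, 0
    for text in samples:
        ids = tokenizer(text, return_tensors="pt").input_ids[0]
        if ids.numel() < prefix_len + suffix_len:
            continue  # sample too short to split into prefix + suffix
        tested += 1
        prefix = ids[:prefix_len].unsqueeze(0)
        target = ids[prefix_len:prefix_len + suffix_len]
        with torch.no_grad():
            out = model.generate(prefix, max_new_tokens=suffix_len,
                                 do_sample=False)  # greedy continuation
        continuation = out[0, prefix_len:prefix_len + suffix_len]
        # torch.equal returns False on a shape mismatch (e.g., early EOS),
        # which correctly counts a truncated continuation as a miss.
        if torch.equal(continuation, target):
            hits += 1
    return hits / tested if tested else 0.0

# Usage: score a small model on excerpts drawn from its training data.
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()
rate = verbatim_memorization_rate(lm, tok, training_excerpts)  # your corpus sample
print(f"verbatim extraction rate: {rate:.2%}")
```

Mitigation work then targets metrics like this one directly: modified training objectives can, for example, exclude some token positions from the next-token loss so exact sequences are harder to store, while parameter-efficient fine-tuning confines updates to small adapter modules, and one intuition is that this leaves less capacity for verbatim storage of the fine-tuning data.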
Papers
19 papers on this topic, dated December 7, 2023 through May 29, 2024.