Limited Memorization
Limited memorization in large language models (LLMs) and other generative AI systems, such as diffusion models and vision-language models, is a critical research area that examines how these models unintentionally store and reproduce training data. Current work measures the extent of memorization across architectures, analyzes its impact on model performance and generalization, and explores mitigation strategies, including modified training objectives and parameter-efficient fine-tuning. Understanding and controlling memorization is crucial for addressing privacy concerns, ensuring copyright compliance, and building more trustworthy and reliable AI systems.
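A common way to quantify the extent of memorization in a causal language model is an extraction test: prompt the model with a prefix taken from a candidate training example and check whether greedy decoding reproduces the held-out continuation verbatim. The sketch below illustrates this idea, assuming a HuggingFace causal LM (gpt2 as a stand-in) and illustrative prefix/suffix lengths; it is a minimal illustration of the general technique, not the protocol of any specific paper listed here.

```python
# Minimal sketch of a verbatim-memorization (extraction) check.
# Assumptions: a HuggingFace causal LM, a candidate string suspected to
# appear in training data, and illustrative prefix/suffix lengths.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def is_memorized(model, tokenizer, text: str,
                 prefix_len: int = 32, suffix_len: int = 32) -> bool:
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    if len(ids) < prefix_len + suffix_len:
        return False  # too short to split into prefix and held-out suffix
    prefix = ids[:prefix_len].unsqueeze(0)
    target = ids[prefix_len:prefix_len + suffix_len]
    with torch.no_grad():
        # Greedy decoding: memorized samples are reproduced without sampling.
        out = model.generate(prefix, max_new_tokens=suffix_len, do_sample=False)
    # For decoder-only models, generate() returns the prefix plus new tokens.
    generated = out[0, prefix_len:prefix_len + suffix_len]
    return torch.equal(generated, target)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
sample = "..."  # hypothetical candidate training string
print(is_memorized(model, tokenizer, sample))
```

Running a check like this over many candidate samples yields a memorization rate, which is one of the quantities mitigation strategies such as modified training objectives aim to reduce.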