Limited Memorization
Limiting memorization in large language models (LLMs) and other generative AI, such as diffusion models and vision-language models, is a critical research area: these models can unintentionally store and reproduce their training data. Current research measures the extent of memorization across architectures, analyzes its impact on model performance and generalization, and explores mitigation strategies, including modified training objectives and parameter-efficient fine-tuning. Understanding and controlling memorization is crucial for protecting privacy, ensuring copyright compliance, and building more trustworthy and reliable AI systems.
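The extent of memorization is commonly measured with extraction tests. Below is a minimal sketch of one standard variant, k-extractability: prompt the model with the first k tokens of a training example and check whether greedy decoding reproduces the true continuation verbatim. The model name, the prefix and suffix lengths, and the use of the Hugging Face transformers stack are assumptions chosen for illustration, not details from any specific paper above.

```python
# Minimal k-extractability check: give the model the first PREFIX_LEN tokens
# of a training sample and test whether greedy decoding reproduces the next
# SUFFIX_LEN tokens verbatim. Model name and lengths are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: any causal LM checkpoint works here
PREFIX_LEN = 50      # k: tokens of context shown to the model
SUFFIX_LEN = 50      # tokens that must match to count as "extracted"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def is_extractable(training_text: str) -> bool:
    """True if greedy decoding reproduces the sample's suffix verbatim."""
    ids = tokenizer(training_text, return_tensors="pt").input_ids[0]
    if ids.numel() < PREFIX_LEN + SUFFIX_LEN:
        return False  # sample too short to run the test
    prefix = ids[:PREFIX_LEN].unsqueeze(0)
    true_suffix = ids[PREFIX_LEN:PREFIX_LEN + SUFFIX_LEN]
    with torch.no_grad():
        out = model.generate(prefix, max_new_tokens=SUFFIX_LEN, do_sample=False)
    return torch.equal(out[0, PREFIX_LEN:PREFIX_LEN + SUFFIX_LEN], true_suffix)
```

On the mitigation side, one line of work modifies the training objective so that no example is ever fully supervised. The sketch below follows the spirit of the "goldfish loss" (randomly dropping a fraction of token positions from the next-token loss); the drop rate is an assumed hyperparameter, and padding handling is omitted for brevity.

```python
# Sketch of a memorization-suppressing training objective in the spirit of
# the "goldfish loss": a random subset of token positions is excluded from
# the next-token cross-entropy, so no sequence is ever fully supervised.
import torch
import torch.nn.functional as F

def dropped_token_loss(logits: torch.Tensor,   # (batch, seq, vocab)
                       labels: torch.Tensor,   # (batch, seq) token ids
                       drop_rate: float = 0.25) -> torch.Tensor:
    logits = logits[:, :-1, :].reshape(-1, logits.size(-1))  # position t predicts t+1
    labels = labels[:, 1:].reshape(-1)
    keep = torch.rand(labels.shape, device=labels.device) >= drop_rate
    return F.cross_entropy(logits[keep], labels[keep])
```

The intuition is that positions excluded from the loss never receive gradient, so the model cannot learn to reproduce long training spans token for token.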