Limited Memorization
Limited memorization in large language models (LLMs) and other generative AI systems, such as diffusion models and vision-language models, is a critical research area focusing on how these models unintentionally store and reproduce training data. Current research investigates the extent of memorization across architectures, analyzes its impact on model performance and generalization, and explores mitigation strategies, including modified training objectives and parameter-efficient fine-tuning. Understanding and controlling memorization is crucial for addressing privacy concerns, ensuring copyright compliance, and building more trustworthy and reliable AI systems.
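A common operational test for memorization, used in several of the extraction studies this area builds on, is verbatim matching: prompt the model with a prefix from the training data and check whether its continuation reproduces the training text exactly. The sketch below is a minimal, hypothetical illustration of that criterion; the function name, the `k`-character match length, and the string-based corpus are assumptions for illustration, not any paper's exact protocol.

```python
def is_memorized(corpus, prompt, continuation, k=50):
    """Flag a model continuation as memorized if the prompt followed by
    its first k characters appears verbatim in some training document.

    corpus       -- iterable of training-document strings
    prompt       -- the prefix fed to the model
    continuation -- the text the model generated after the prompt
    k            -- how many generated characters must match verbatim
    """
    candidate = prompt + continuation[:k]
    # Exact substring search stands in for the dedup/suffix-array
    # machinery real extraction studies use at scale.
    return any(candidate in doc for doc in corpus)


corpus = ["the quick brown fox jumps over the lazy dog"]
print(is_memorized(corpus, "the quick brown ", "fox jumps", k=9))  # True: verbatim copy
print(is_memorized(corpus, "the quick brown ", "cat sits", k=9))   # False: novel text
```

In practice, researchers vary `k` and the prompt length to distinguish benign overlap (short common phrases) from problematic regurgitation of long unique passages.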