Memorization Capacity
Research on memorization capacity in machine learning investigates how efficiently neural networks can store and retrieve information, which affects both model performance and resource efficiency. Current work focuses on characterizing this capacity across architectures, including transformers and recurrent networks, and on analyzing the influence of factors such as network depth, parameter sharing, and training methods (e.g., fine-tuning, data augmentation). These investigations are crucial for optimizing model design, improving generalization, and enabling efficient deployment of increasingly complex models on resource-constrained devices.
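One common way to probe memorization capacity empirically is to train a network on randomly labeled data, where no generalizable pattern exists, and measure what fraction of the training set it can fit. The sketch below illustrates this idea in PyTorch; the network sizes, data dimensions, and training schedule are illustrative assumptions, not taken from any specific paper above.

```python
# Minimal sketch (illustrative assumptions throughout): probing memorization
# capacity by fitting random labels with a small MLP. Since the labels are
# random, the only way to achieve high training accuracy is to memorize.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_samples, dim, n_classes = 512, 32, 10
X = torch.randn(n_samples, dim)                 # random inputs
y = torch.randint(0, n_classes, (n_samples,))   # random, unlearnable labels


def memorized_fraction(hidden_width: int, epochs: int = 500) -> float:
    """Train a one-hidden-layer MLP on random labels; the fraction of
    training points it fits is an empirical proxy for its capacity."""
    model = nn.Sequential(
        nn.Linear(dim, hidden_width),
        nn.ReLU(),
        nn.Linear(hidden_width, n_classes),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return (model(X).argmax(dim=1) == y).float().mean().item()


# Wider networks (more parameters) should memorize a larger fraction
# of the random labels, tracing out a capacity curve.
for width in (4, 16, 64, 256):
    print(f"width={width:4d}  memorized={memorized_fraction(width):.2f}")
```

Sweeping the hidden width traces how memorized fraction grows with parameter count, which is the basic experimental handle behind many of the capacity analyses this line of work builds on.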