Memorization Capacity
Memorization capacity quantifies how much information a neural network can store in its parameters and later retrieve, typically measured as the largest set of input-label pairs a network of a given size can fit exactly; it bears on both model performance and resource efficiency. Current research examines this capacity across architectures, including transformers and recurrent networks, analyzing the influence of factors such as network depth, parameter sharing, and training methods (e.g., fine-tuning, data augmentation). These investigations are crucial for optimizing model design, improving generalization, and enabling efficient deployment of increasingly complex models on resource-constrained devices.
15 papers