Memory-Efficient Training
Memory-efficient training focuses on reducing the substantial memory demands of training large neural networks, particularly large language models (LLMs) and transformers, while maintaining or improving performance. Current research explores diverse techniques, including optimizing mini-batch selection, employing low-rank approximations (like LoRA), utilizing reversible architectures, and developing novel quantization and pruning methods to compress activations and gradients. These advancements are crucial for democratizing access to powerful AI models by enabling training on more readily available hardware and reducing the environmental impact of computationally intensive training processes.
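As a concrete illustration of the low-rank direction, the sketch below wraps a frozen linear layer with trainable LoRA-style factors so that only the small low-rank matrices receive gradients and optimizer state. It is a minimal sketch, assuming PyTorch; the class name, rank, and scaling values are illustrative choices, not a reference implementation from any particular paper.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # pretrained weights stay frozen: no gradients or optimizer state for them
            # Low-rank factors: A is small-random, B is zero so training starts from the pretrained function.
            self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
            self.scaling = alpha / r

        def forward(self, x):
            # Frozen full-rank path plus the trainable low-rank correction.
            return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

    # Only the low-rank factors are passed to the optimizer, which is where the
    # memory savings during fine-tuning come from (far fewer gradients and Adam moments).
    layer = LoRALinear(nn.Linear(4096, 4096), r=8)
    trainable = [p for p in layer.parameters() if p.requires_grad]
    optimizer = torch.optim.AdamW(trainable, lr=1e-4)

For a 4096-by-4096 layer, the frozen weight has about 16.8M parameters while the rank-8 adapter adds only about 65K trainable ones, so the optimizer state shrinks by roughly two orders of magnitude for that layer.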