Sparsification Methods
Sparsification methods aim to reduce the computational complexity and memory footprint of large models, such as deep neural networks and large language models, with minimal loss in predictive performance. Current research focuses on developing efficient sparsification algorithms, both those integrated into the training process itself (e.g., "always-sparse" training) and post-training techniques that remove redundant parameters or reduce dimensionality. These advances are crucial for deploying large models on resource-constrained devices and for improving efficiency in applications such as federated learning and control systems.
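As a concrete illustration of the post-training family (not drawn from any specific paper above), the sketch below shows layer-wise magnitude pruning, one of the simplest sparsification criteria: within each layer, the weights with the smallest absolute values are assumed redundant and zeroed out. The function name magnitude_prune and the PyTorch-based setup are illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float) -> None:
    """Zero out the smallest-magnitude weights in each Linear layer.

    `sparsity` is the fraction of weights to remove per layer,
    e.g. 0.9 leaves only the largest 10% of weights nonzero.
    This is a one-shot post-training criterion; no retraining is done here.
    """
    for module in model.modules():
        if isinstance(module, nn.Linear):
            weight = module.weight.data
            k = int(sparsity * weight.numel())
            if k == 0:
                continue
            # The k-th smallest absolute value serves as the pruning threshold.
            threshold = weight.abs().flatten().kthvalue(k).values
            mask = (weight.abs() > threshold).to(weight.dtype)
            weight.mul_(mask)

# Example: prune a toy two-layer MLP to ~90% sparsity per layer.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
magnitude_prune(model, sparsity=0.9)
zeros = sum((m.weight == 0).sum().item()
            for m in model.modules() if isinstance(m, nn.Linear))
total = sum(m.weight.numel()
            for m in model.modules() if isinstance(m, nn.Linear))
print(f"sparsity: {zeros / total:.2%}")
```

In practice, one-shot magnitude pruning is usually followed by fine-tuning to recover accuracy, whereas "always-sparse" training methods maintain a sparse parameter set throughout training instead.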