Unstructured Pruning
Unstructured pruning improves the efficiency of deep learning models by removing individual low-importance parameters (weights) while preserving accuracy. Current research applies the technique to large language models (LLMs), convolutional neural networks (CNNs), and vision transformers (ViTs), typically using weight magnitude, gradient information, or activation statistics to decide which weights to remove. Because pruning reduces the computational cost and memory footprint of these models, it enables deployment on resource-constrained devices and faster inference, particularly in natural language processing and computer vision.
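As a concrete illustration of the magnitude-based criterion mentioned above, the sketch below zeroes out the smallest-magnitude weights of each linear layer using PyTorch's torch.nn.utils.prune utilities. The toy model architecture and the 80% sparsity level are illustrative assumptions, not drawn from any particular paper.

```python
# Minimal sketch of magnitude-based unstructured pruning in PyTorch.
# The model and sparsity level here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Zero out the 80% of weights with the smallest absolute value (L1 magnitude)
# in each Linear layer, regardless of their position in the weight matrix.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)

# Fold the pruning masks back into the weight tensors so the zeros are permanent.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")

# Report the resulting global sparsity across all Linear layers.
linear_layers = [m for m in model.modules() if isinstance(m, nn.Linear)]
total = sum(m.weight.numel() for m in linear_layers)
zeros = sum((m.weight == 0).sum().item() for m in linear_layers)
print(f"Global sparsity: {zeros / total:.1%}")
```

Note that zeroing weights this way shrinks memory use and speeds up inference only when the model is stored in a sparse format or executed with sparse-aware kernels; on dense hardware the zeroed weights still occupy space and compute.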