Edge Pruning
Edge pruning is a neural network compression technique that reduces computational cost and memory usage by removing less important connections or parameters while preserving accuracy. Current research focuses on efficient pruning algorithms for architectures such as convolutional neural networks (CNNs), vision transformers (ViTs), and large language models (LLMs), often combining pruning with knowledge distillation or optimization-based methods to recover performance after weights are removed. This work matters because it enables the deployment of large, powerful models on resource-constrained devices and improves the energy efficiency of both training and inference, advancing scientific understanding of model redundancy as well as practical applications across diverse fields.
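To make the core idea concrete, below is a minimal sketch of unstructured magnitude pruning, the simplest importance criterion: weights with the smallest absolute values are masked to zero. The magnitude_prune helper and the 90% sparsity setting are illustrative assumptions, not the method of any paper listed below; those papers develop more sophisticated criteria (loss-value-based selection, gradient-optimized pruning, calibration-data-aware scoring).

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Return a boolean mask that zeroes out the smallest-magnitude entries.

    A minimal illustrative sketch of unstructured magnitude pruning;
    this helper is hypothetical and not taken from the papers below.
    """
    k = int(sparsity * weight.numel())  # number of weights to remove
    if k == 0:
        return torch.ones_like(weight, dtype=torch.bool)
    # Threshold = k-th smallest absolute value; entries at or below it are pruned.
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight.abs() > threshold

# Example: prune 90% of a random linear layer's weights.
w = torch.randn(256, 256)
mask = magnitude_prune(w, sparsity=0.9)
w_pruned = w * mask
print(f"kept {mask.float().mean():.1%} of weights")
```

In practice the mask is applied during or after fine-tuning so the surviving weights can adapt to the removed connections; structured variants prune whole channels, heads, or blocks instead of individual entries, which maps more directly onto hardware speedups.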
Papers
Distill the Best, Ignore the Rest: Improving Dataset Distillation with Loss-Value-Based Pruning
Brian B. Moser, Federico Raue, Tobias C. Nauen, Stanislav Frolov, Andreas Dengel
Just Leaf It: Accelerating Diffusion Classifiers with Hierarchical Class Pruning
Arundhati S. Shanbhag, Brian B. Moser, Tobias C. Nauen, Stanislav Frolov, Federico Raue, Andreas Dengel
Self-calibration for Language Model Quantization and Pruning
Miles Williams, George Chrysostomou, Nikolaos Aletras
DiP-GO: A Diffusion Pruner via Few-step Gradient Optimization
Haowei Zhu, Dehua Tang, Ji Liu, Mingjie Lu, Jintu Zheng, Jinzhang Peng, Dong Li, Yu Wang, Fan Jiang, Lu Tian, Spandan Tiwari, Ashish Sirasao, Jun-Hai Yong, Bin Wang, Emad Barsoum
Is C4 Dataset Optimal for Pruning? An Investigation of Calibration Data for LLM Pruning
Abhinav Bandari, Lu Yin, Cheng-Yu Hsieh, Ajay Kumar Jaiswal, Tianlong Chen, Li Shen, Ranjay Krishna, Shiwei Liu
S2HPruner: Soft-to-Hard Distillation Bridges the Discretization Gap in Pruning
Weihao Lin, Shengji Tang, Chong Yu, Peng Ye, Tao Chen
Enhancing Vision-Language Model Pre-training with Image-text Pair Pruning Based on Word Frequency
Mingliang Liang, Martha Larson