Edge Pruning
Edge pruning is a neural network compression technique that removes less important connections or parameters to reduce computational cost and memory usage without significant performance degradation. Current research focuses on efficient pruning algorithms for a range of architectures, including convolutional neural networks (CNNs), vision transformers (ViTs), and large language models (LLMs), often combined with knowledge distillation or optimization-based methods to recover performance after pruning. This work matters because it enables the deployment of large, powerful models on resource-constrained devices and improves the energy efficiency of training and inference, advancing both the scientific understanding of model redundancy and practical applications across diverse fields.
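As a concrete illustration of the idea, below is a minimal sketch of unstructured magnitude pruning, the simplest member of this family: connections whose weights have the smallest absolute values are zeroed out. The function name and the thresholding rule are illustrative assumptions, not drawn from any of the papers listed here.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out roughly the smallest-magnitude `sparsity` fraction of weights.

    Returns the pruned weight array and the boolean keep-mask.
    Illustrative sketch only; real methods score importance more carefully.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    # Threshold at the k-th smallest magnitude; ties may prune a few extra weights.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

w = np.array([[0.1, -2.0],
              [0.5,  3.0]])
pruned, mask = magnitude_prune(w, 0.5)
# The two smallest-magnitude entries (0.1 and 0.5) are removed;
# the large weights (-2.0 and 3.0) survive.
```

In practice, pruned networks are usually fine-tuned (or distilled from the dense model) afterwards, since removing connections perturbs the learned function.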
Papers
LoRAPrune: Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning
Mingyang Zhang, Hao Chen, Chunhua Shen, Zhen Yang, Linlin Ou, Xinyi Yu, Bohan Zhuang
Neural Sculpting: Uncovering hierarchically modular task structure in neural networks through pruning and network analysis
Shreyas Malakarjun Patil, Loizos Michael, Constantine Dovrolis
DPHuBERT: Joint Distillation and Pruning of Self-Supervised Speech Models
Yifan Peng, Yui Sudo, Shakeel Muhammad, Shinji Watanabe