Network Pruning
Network pruning aims to reduce the size and computational cost of deep neural networks (DNNs) without significant loss of accuracy, primarily by removing weights or connections deemed unimportant. Current research focuses on developing efficient pruning algorithms for large language models (LLMs), convolutional neural networks (CNNs), and spiking neural networks (SNNs), often employing structured or unstructured pruning and incorporating optimization methods that preserve accuracy while speeding up the pruning process itself. These advances are crucial for deploying large-scale DNNs on resource-constrained devices, improving energy efficiency, and accelerating inference across a wide range of applications.
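As an illustration of the two pruning styles mentioned above, the sketch below applies magnitude-based unstructured pruning and L2-norm structured pruning to a toy network using PyTorch's torch.nn.utils.prune utilities. The layer sizes and pruning ratios are arbitrary placeholders for illustration, not values taken from any particular paper.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy MLP standing in for a larger DNN; sizes are illustrative only.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Unstructured pruning: zero out the 40% of weights with the smallest
# magnitude, pooled across every Linear layer in the model.
to_prune = [(m, "weight") for m in model.modules() if isinstance(m, nn.Linear)]
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.4)

# Structured pruning: remove 25% of the first layer's output neurons
# (whole rows of its weight matrix), ranked by their L2 norm.
prune.ln_structured(model[0], name="weight", amount=0.25, n=2, dim=0)

# Fold the binary masks into the weight tensors so the pruned model can be
# saved and served without the pruning re-parametrization.
for module, name in to_prune:
    prune.remove(module, name)

sparsity = (model[0].weight == 0).float().mean().item()
print(f"First-layer sparsity after pruning: {sparsity:.2%}")
```

In practice the pruned model is then fine-tuned for a few epochs to recover the accuracy lost to pruning, and structured sparsity is generally easier to translate into real speedups on standard hardware than unstructured sparsity.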