Network Sparsity
Network sparsity reduces the number of connections or parameters in a neural network to cut computational cost without significant loss of performance. Current research explores pruning at several granularities, including individual weights, blocks, and whole units, most often in convolutional and graph neural networks, with a focus on algorithms that reach high sparsity ratios while preserving accuracy. The area matters because it addresses the computational demands of large-scale models, enabling deployment on resource-constrained devices and speeding up training, which is particularly relevant to decentralized learning and edge computing.
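As a concrete illustration, the sketch below shows one of the simplest of these techniques, global magnitude-based weight pruning, in PyTorch. The function name `magnitude_prune` and the 90% sparsity target are illustrative choices, not taken from any specific paper in this area.

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.9) -> None:
    """Zero out the smallest-magnitude weights across the whole model.

    `sparsity` is the target fraction of weights set to zero.
    """
    # Gather the magnitudes of all weight tensors (biases stay dense).
    magnitudes = torch.cat(
        [p.detach().abs().flatten()
         for name, p in model.named_parameters() if name.endswith("weight")]
    )
    # The k-th smallest magnitude serves as the global pruning threshold.
    k = max(1, int(sparsity * magnitudes.numel()))
    threshold = torch.kthvalue(magnitudes, k).values
    # Apply a binary mask: weights at or below the threshold become zero.
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name.endswith("weight"):
                p.mul_((p.abs() > threshold).to(p.dtype))

# Toy usage: prune a small MLP and report the achieved sparsity.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))
magnitude_prune(model, sparsity=0.9)
zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"overall sparsity: {zeros / total:.2%}")
```

Block and unit pruning follow the same masking idea but zero out contiguous groups of weights or entire neurons and filters, a structure that generally maps onto hardware more efficiently than scattered zeros.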