Sparse Neural Networks
Sparse neural networks (SNNs) aim to improve the efficiency and interpretability of deep learning models by reducing the number of parameters while maintaining or even exceeding the performance of their dense counterparts. Current research focuses on developing novel pruning algorithms, exploring the interplay between data and model architecture in achieving sparsity, and investigating the impact of sparsity on training dynamics and generalization. This area is significant because SNNs offer the potential for reduced computational costs, improved energy efficiency, and enhanced model interpretability, leading to wider deployment of deep learning in resource-constrained environments and applications requiring explainability.
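The most common entry point to the pruning algorithms mentioned above is one-shot magnitude pruning: zero out the weights with the smallest absolute value and keep the rest. The sketch below illustrates this with PyTorch's `torch.nn.utils.prune` utilities; the toy model, the layer sizes, and the 90% sparsity target are illustrative assumptions, not values drawn from any particular paper.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small dense model; the layer sizes are arbitrary placeholders.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Unstructured L1 (magnitude) pruning: zero the 90% of weights with
# the smallest absolute value, independently in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)

# Report the resulting sparsity of each pruned layer.
for i, module in enumerate(model.modules()):
    if isinstance(module, nn.Linear):
        zeros = (module.weight == 0).float().mean().item()
        print(f"layer {i}: {zeros:.1%} of weights are zero")

# Make the pruning permanent: fold the binary mask into the weight
# tensor (until then, weight is computed as weight_orig * weight_mask).
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")
```

Note that zeroing weights alone does not reduce compute or memory on its own; realizing the efficiency gains discussed above additionally requires sparse storage formats or hardware/kernel support that can skip the zeroed entries.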