Sparse Subnetworks
Sparse subnetworks research focuses on identifying and training smaller, more efficient subnetworks that match the performance of the larger, denser networks from which they are drawn. Current efforts concentrate on algorithms for discovering such high-performing subnetworks within a range of architectures, including convolutional neural networks and transformers, often via iterative magnitude pruning and binary weight masking. This work matters because it reduces the computational and memory cost of deep learning models, making AI applications more efficient, sustainable, and accessible.
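To make the pruning-and-masking idea concrete, below is a minimal sketch of one common variant: global iterative magnitude pruning with lottery-ticket-style rewinding, where the lowest-magnitude weights are masked out each round and the survivors are reset to their initial values and retrained. The tiny MLP, random data, learning rate, and pruning schedule are all hypothetical placeholders for illustration, not a reproduction of any specific paper's method.

```python
import torch
import torch.nn as nn

def magnitude_masks(model, fraction):
    """Binary masks removing the lowest-magnitude `fraction` of the
    currently surviving (nonzero) weights, using one global threshold."""
    weights = torch.cat([p.detach().abs().flatten()
                         for n, p in model.named_parameters() if "weight" in n])
    threshold = torch.quantile(weights[weights > 0], fraction)
    return {n: (p.detach().abs() > threshold).float()
            for n, p in model.named_parameters() if "weight" in n}

def apply_masks(model, masks):
    """Zero out masked weights in place so the subnetwork stays sparse."""
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in masks:
                p.mul_(masks[n])

# Hypothetical setup: a small MLP on random data, standing in for a real task.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
init_state = {k: v.clone() for k, v in model.state_dict().items()}  # for rewinding
x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
loss_fn = nn.CrossEntropyLoss()

masks = None
for round_ in range(3):  # 3 rounds x 20% pruning keeps 0.8**3 ~ 51% of weights
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(100):                 # (re)train the current subnetwork
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        if masks is not None:
            apply_masks(model, masks)    # re-zero weights the update disturbed
    masks = magnitude_masks(model, 0.2)  # prune 20% of surviving weights
    model.load_state_dict(init_state)    # lottery-ticket-style rewind to init
    apply_masks(model, masks)            # the sparse subnetwork, ready to retrain

total = sum(m.numel() for m in masks.values())
kept = int(sum(m.sum() for m in masks.values()))
print(f"kept {kept}/{total} weights ({kept / total:.1%})")
```

Two design choices in this sketch are frequently varied in the literature: the threshold can be global (as here) or computed per layer, and the surviving weights can be rewound to their initialization or simply fine-tuned from their trained values.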