Sparse Subnetworks
Research on sparse subnetworks focuses on identifying and training smaller, more efficient subnetworks that match the performance of the larger, dense networks they are drawn from. Current efforts concentrate on algorithms for discovering such high-performing subnetworks within a range of architectures, including convolutional neural networks and transformers, typically via iterative pruning and weight masking: low-magnitude weights are removed in rounds, and a binary mask keeps them at zero during subsequent retraining (see the sketch below). This line of work matters because it promises to cut the computational and memory cost of deep learning models, making AI applications more sustainable and accessible.
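To make the prune-and-mask loop concrete, here is a minimal sketch of one common variant, iterative magnitude pruning, in PyTorch. The small MLP, the 20%-of-remaining-weights-per-round schedule, and the synthetic training batches are all illustrative assumptions for this sketch, not the procedure of any particular paper.

```python
# Minimal sketch: iterative magnitude pruning with weight masking (PyTorch).
# Model, schedule, and data are illustrative assumptions.
import torch
import torch.nn as nn

def magnitude_masks(model, sparsity):
    """Build per-layer binary masks that zero the smallest-magnitude weights."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:  # skip biases and other 1-D parameters
            continue
        threshold = torch.quantile(param.abs().flatten(), sparsity)
        masks[name] = (param.abs() > threshold).float()
    return masks

def apply_masks(model, masks):
    """Zero out pruned weights so the subnetwork stays sparse."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

sparsity = 0.0
for round_idx in range(3):                    # three prune-retrain rounds
    sparsity = 1.0 - 0.8 * (1.0 - sparsity)   # prune 20% of remaining weights
    masks = magnitude_masks(model, sparsity)
    for step in range(100):                   # retrain the surviving subnetwork
        x = torch.randn(32, 784)              # stand-in for real training data
        y = torch.randint(0, 10, (32,))
        loss = loss_fn(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        apply_masks(model, masks)             # keep pruned weights at zero
```

The key design choice is reapplying the masks after every optimizer step: gradient updates would otherwise move pruned weights away from zero and quietly densify the subnetwork again.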