Sparse Subnetworks
Sparse subnetworks research focuses on identifying and training small subsets of a neural network's weights that retain the performance of the full, dense model. Current efforts concentrate on algorithms for discovering these high-performing subnetworks within various architectures, including convolutional neural networks and transformers, often leveraging techniques such as iterative magnitude pruning and weight masking. This research matters because it promises to improve the efficiency and reduce the computational cost of deep learning models, leading to more sustainable and accessible AI applications.
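To make the core technique concrete, below is a minimal PyTorch sketch of iterative magnitude pruning with weight masking: weights with the smallest magnitudes are masked out, the surviving weights are rewound to their initial values, and training resumes on the sparse subnetwork. The helper names (`magnitude_masks`, `apply_masks`, `iterative_prune`) and the caller-supplied `train_one_epoch` function are illustrative assumptions, not an API from any specific paper.

```python
import torch
import torch.nn as nn


def magnitude_masks(model: nn.Module, sparsity: float) -> dict:
    """Binary masks that zero out the smallest-magnitude weights per layer."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() > 1:  # prune weight matrices; keep biases dense
            k = int(param.numel() * sparsity)
            if k == 0:
                masks[name] = torch.ones_like(param)
                continue
            # k-th smallest absolute value serves as the pruning threshold
            threshold = param.detach().abs().flatten().kthvalue(k).values
            masks[name] = (param.detach().abs() > threshold).float()
    return masks


@torch.no_grad()
def apply_masks(model: nn.Module, masks: dict) -> None:
    """Zero masked weights in place; re-apply after every optimizer step."""
    for name, param in model.named_parameters():
        if name in masks:
            param.mul_(masks[name])


def iterative_prune(model: nn.Module, train_one_epoch, rounds: int = 3,
                    per_round_sparsity: float = 0.5) -> dict:
    """Train, prune by magnitude, rewind survivors to init, and repeat.

    `train_one_epoch` is a hypothetical caller-supplied training function;
    final retraining of the resulting subnetwork is left to the caller.
    """
    initial_state = {k: v.clone() for k, v in model.state_dict().items()}
    sparsity, masks = 0.0, {}
    for _ in range(rounds):
        train_one_epoch(model)
        # compound sparsity: each round prunes a fraction of surviving weights
        sparsity = 1.0 - (1.0 - sparsity) * (1.0 - per_round_sparsity)
        masks = magnitude_masks(model, sparsity)
        model.load_state_dict(initial_state)  # rewind to initialization
        apply_masks(model, masks)
    return masks
```

Already-pruned weights have magnitude zero, so they fall below the threshold in every subsequent round and stay pruned, which is why a single global mask recomputation per round suffices in this sketch.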