Sparse Network Training

Sparse network training aims to develop efficient deep learning models by reducing the number of parameters while maintaining or improving performance. Current research focuses on methods that encourage sparsity during training, including techniques that leverage inter-layer feature similarity and those that generate and combine multiple sparse subnetworks. This area is significant because it promises to reduce computational costs, energy consumption, and memory requirements for deep learning, making it more accessible and environmentally friendly while potentially improving model generalization.
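To make the general idea concrete, below is a minimal sketch (not taken from any particular paper listed here) of one common sparse-training recipe: fix a binary mask per weight matrix based on weight magnitude and keep re-applying it during training so only the surviving connections are learned. The helper names `build_masks` and `apply_masks`, the 90% sparsity level, and the toy model and data are illustrative assumptions, not a specific published method.

```python
# Minimal magnitude-based sparse training sketch in PyTorch (illustrative only).
# Each weight matrix gets a binary mask keeping its largest-magnitude entries;
# the mask is re-applied around every optimizer step to hold the target sparsity.
import torch
import torch.nn as nn

def build_masks(model, sparsity=0.9):
    """Create a binary mask per weight matrix, keeping the largest-magnitude entries."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:  # skip biases and other 1-D parameters
            continue
        k = int(sparsity * param.numel())
        threshold = param.abs().flatten().kthvalue(k).values
        masks[name] = (param.abs() > threshold).float()
    return masks

def apply_masks(model, masks):
    """Zero out pruned weights so only the sparse subnetwork carries signal."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

# Toy model and random data, purely as placeholders.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
masks = build_masks(model, sparsity=0.9)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    x = torch.randn(64, 784)
    y = torch.randint(0, 10, (64,))
    apply_masks(model, masks)      # forward pass uses only unpruned weights
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    apply_masks(model, masks)      # re-zero pruned weights after the update
```

In this static-mask variant roughly 90% of the weights stay at zero throughout training; methods surveyed in this area differ mainly in how the masks are chosen and updated, for example by exploiting inter-layer feature similarity or by combining several sparse subnetworks.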

Papers