Sparse DNN

Sparse deep neural networks (DNNs) aim to reduce the computational cost and memory footprint of DNNs by pruning, i.e., strategically removing less important weights, connections, or neurons, while maintaining accuracy. Current research focuses on efficient sparse training algorithms, hardware-accelerated architectures tailored to sparse computation (including structured sparsity patterns such as N:M sparsity), and novel model architectures such as message-passing networks for geometric data. These advances are significant because they enable the deployment of larger and more complex DNNs on resource-constrained devices and improve the energy efficiency of deep learning applications across a variety of domains.
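As an illustrative sketch (not drawn from any specific paper listed below), the snippet shows one way to impose N:M structured sparsity on a weight matrix with NumPy: within every contiguous group of `m` weights, only the `n` largest-magnitude entries are kept and the rest are zeroed. The function name `apply_nm_sparsity` and the 2:4 setting are assumptions for the example; the 2:4 pattern is the one supported by sparse tensor cores on some modern accelerators.

```python
import numpy as np

def apply_nm_sparsity(weights: np.ndarray, n: int = 2, m: int = 4) -> np.ndarray:
    """Zero out all but the n largest-magnitude weights in each group of m.

    Sketch of N:M structured sparsity (e.g., 2:4): every contiguous group
    of m weights along the last axis keeps only its n largest-magnitude
    entries, giving a fixed, hardware-friendly sparsity pattern.
    """
    assert weights.shape[-1] % m == 0, "last dimension must be divisible by m"
    # View the tensor as rows of m-element groups.
    groups = weights.reshape(-1, m)
    # Rank entries within each group by magnitude (ascending).
    idx = np.argsort(np.abs(groups), axis=1)
    # Build a mask that drops the m - n smallest-magnitude entries per group.
    mask = np.ones_like(groups, dtype=bool)
    np.put_along_axis(mask, idx[:, : m - n], False, axis=1)
    return (groups * mask).reshape(weights.shape)

# Example: prune a 4x8 weight matrix to 2:4 sparsity (50% zeros).
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
W_sparse = apply_nm_sparsity(W, n=2, m=4)
print((W_sparse == 0).mean())  # -> 0.5
```

In practice such masking is typically applied during or after training (e.g., with fine-tuning to recover accuracy), and the fixed N:M pattern is what lets sparse hardware skip the zeroed multiplications.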

Papers