Sparse Architecture
Sparse architectures in deep learning reduce computational cost and memory usage by strategically eliminating redundant connections or activations within neural networks. Current research focuses on developing efficient sparse algorithms and architectures, including those based on probabilistic approximation, mixture-of-experts models, and structured sparse tensor decompositions, applied to tasks such as 3D object detection, autonomous driving, and large vision-language models. This work matters because it enables larger, more capable models while maintaining or even improving accuracy and energy efficiency, paving the way for deploying advanced AI systems on resource-constrained devices.
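One common way to "eliminate redundant connections," as described above, is magnitude-based unstructured pruning: weights with the smallest absolute values are set to zero, leaving a sparse weight matrix. The sketch below is a minimal NumPy illustration of this idea (the function name and interface are illustrative, not from any specific paper mentioned here):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning).

    `sparsity` is the target fraction of weights to remove, e.g. 0.9 keeps
    only the largest 10% of weights by absolute value.
    """
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest absolute value; weights at or below it are pruned.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned = magnitude_prune(w, sparsity=0.9)
# Roughly 90% of entries are now zero; sparse kernels can skip them entirely.
print(1 - np.count_nonzero(pruned) / pruned.size)
```

In practice, pruning is usually followed by fine-tuning to recover accuracy, and real speedups require hardware or kernels that exploit the zero pattern (e.g. structured sparsity), which is one motivation for the structured decompositions noted above.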