Many Sparse
Research under the "Many Sparse" topic focuses on developing efficient methods for handling sparse data and models, with the primary aim of reducing computational cost and memory consumption while maintaining or improving performance. Current efforts concentrate on sparse neural network architectures (including Mixture-of-Experts models and various pruning techniques), sparse attention mechanisms in transformers, and sparse representations for data types such as point clouds and images. This work is significant for advancing machine learning in resource-constrained environments and for enabling large models to scale to previously intractable sizes and complexities.
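As a concrete illustration of the pruning techniques mentioned above, the Python sketch below shows simple unstructured magnitude pruning with NumPy: the smallest-magnitude fraction of weights is zeroed, leaving a sparse matrix. It is not drawn from any of the listed papers; the function name, thresholding strategy, and parameters are illustrative assumptions.

import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    # Zero out the smallest-magnitude fraction of weights (unstructured pruning).
    # Illustrative sketch only, not the method of any paper listed here.
    if not 0.0 <= sparsity < 1.0:
        raise ValueError("sparsity must be in [0, 1)")
    k = int(weights.size * sparsity)               # number of weights to remove
    if k == 0:
        return weights.copy()
    flat = np.abs(weights).ravel()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold             # keep only larger-magnitude weights
    return weights * mask

# Example: prune 90% of a random weight matrix and check the remaining density.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))
W_sparse = magnitude_prune(W, sparsity=0.9)
print(f"nonzero fraction: {np.count_nonzero(W_sparse) / W_sparse.size:.3f}")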
Papers
Selectively Dilated Convolution for Accuracy-Preserving Sparse Pillar-based Embedded 3D Object Detection
Seongmin Park, Minjae Lee, Junwon Choi, Jungwook Choi
TranSplat: Generalizable 3D Gaussian Splatting from Sparse Multi-View Images with Transformers
Chuanrui Zhang, Yingshuang Zou, Zhuoling Li, Minmin Yi, Haoqian Wang
TrackNeRF: Bundle Adjusting NeRF from Sparse and Noisy Views via Feature Tracks
Jinjie Mai, Wenxuan Zhu, Sara Rojas, Jesus Zarzar, Abdullah Hamdi, Guocheng Qian, Bing Li, Silvio Giancola, Bernard Ghanem
Overcoming Growth-Induced Forgetting in Task-Agnostic Continual Learning
Yuqing Zhao, Divya Saxena, Jiannong Cao, Xiaoyun Liu, Changlin Song