Sparse Methods
Research on sparse methods focuses on developing efficient techniques for handling sparse data and models, primarily to reduce computational cost and memory consumption while maintaining or improving performance. Current efforts concentrate on sparse neural network architectures (including Mixture-of-Experts models and various pruning techniques), sparse attention mechanisms in transformers, and sparse representations for diverse data types (e.g., point clouds, images). This work is significant for advancing machine learning in resource-constrained environments and for scaling large models to previously intractable sizes and complexities.
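To make one of the techniques named above concrete, the sketch below shows unstructured magnitude pruning, a common way to sparsify a weight matrix by zeroing its smallest-magnitude entries. It is a minimal NumPy illustration under assumed conventions; the function name, signature, and sparsity level are illustrative and not taken from any of the listed papers.

```python
# Minimal sketch of unstructured magnitude pruning (illustrative,
# not from any specific paper listed below).
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of entries with smallest magnitude."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest absolute value; entries at or below it are pruned.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.random.randn(512, 512)
w_sparse = magnitude_prune(w, sparsity=0.9)
print(f"actual sparsity: {1 - np.count_nonzero(w_sparse) / w_sparse.size:.3f}")
```

Because pruning is applied as a mask, the matrix keeps its original shape, so the result can feed masked retraining or be converted to a compressed sparse format for storage and inference.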
Papers
I See-Through You: A Framework for Removing Foreground Occlusion in Both Sparse and Dense Light Field Images
Jiwan Hur, Jae Young Lee, Jaehyun Choi, Junmo Kim
Improving Target Speaker Extraction with Sparse LDA-transformed Speaker Embeddings
Kai Liu, Xucheng Wan, Ziqing Du, Huan Zhou
Swarm-SLAM: Sparse Decentralized Collaborative Simultaneous Localization and Mapping Framework for Multi-Robot Systems
Pierre-Yves Lajoie, Giovanni Beltrame