Many Sparse
Research on sparse methods focuses on developing efficient techniques for handling sparse data and models, with the primary aim of reducing computational cost and memory consumption while maintaining or improving performance. Current efforts concentrate on sparse neural network architectures (including Mixture-of-Experts models and various pruning techniques), sparse attention mechanisms in transformers, and sparse representations for diverse data types (e.g., point clouds, images). This work is significant for advancing machine learning in resource-constrained environments and for scaling large models to previously intractable sizes and complexities.
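The pruning techniques mentioned above can be illustrated with a minimal sketch of unstructured magnitude pruning, the simplest approach in this family (an illustrative example, not drawn from any specific paper listed on this page):

```python
import numpy as np

np.random.seed(0)  # fixed seed so the example is reproducible

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly a
    `sparsity` fraction of the weights become zero (unstructured pruning)."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # Threshold = magnitude of the k-th smallest |w|
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.random.randn(4, 4)
w_sparse = magnitude_prune(w, sparsity=0.75)
print(np.mean(w_sparse == 0))  # fraction of zeroed entries, here 0.75
```

Real pruning pipelines typically iterate this step with retraining and may impose structure (whole neurons, heads, or blocks) so that hardware can actually exploit the sparsity; this sketch only shows the core thresholding idea.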
219 papers
Papers - Page 3
January 6, 2025
Rethinking Byzantine Robustness in Federated Recommendation from Sparse Aggregation Perspective
Zhongjian Zhang, Mengmei Zhang, Xiao Wang, Lingjuan Lyu, Bo Yan, Junping Du, Chuan Shi

From Dense to Sparse: Event Response for Enhanced Residential Load Forecasting
Xin Cao, Qinghua Tao, Yingjie Zhou, Lu Zhang, Le Zhang, Dongjin Song, Dapeng Oliver Wu, Ce Zhu
January 2, 2025
Operator Learning for Reconstructing Flow Fields from Sparse Measurements: an Energy Transformer Approach
Qian Zhang, Dmitry Krotov, George Em Karniadakis

Sparis: Neural Implicit Surface Reconstruction of Indoor Scenes from Sparse Views
Yulun Wu, Han Huang, Wenyuan Zhang, Chao Deng, Ge Gao, Ming Gu, Yu-Shen Liu
December 19, 2024
DCL-Sparse: Distributed Range-only Cooperative Localization of Multi-Robots in Noisy and Sparse Sensing Graphs
Atharva Sagale, Tohid Kargar Tasooji, Ramviyas Parasuraman

Agent-Temporal Credit Assignment for Optimal Policy Preservation in Sparse Multi-Agent Reinforcement Learning
Aditya Kapoor, Sushant Swamy, Kale-ab Tessera, Mayank Baranwal, Mingfei Sun, Harshad Khadilkar, Stefano V. Albrecht