Many Sparse
Many Sparse research focuses on developing efficient methods for handling sparse data and models, primarily aiming to reduce computational cost and memory consumption while maintaining or improving performance. Current efforts concentrate on sparse neural network architectures (including Mixture-of-Experts models and various pruning techniques), sparse attention mechanisms in transformers, and sparse representations for data types such as point clouds and images. This work is significant for advancing machine learning applications in resource-constrained environments and for enabling the scaling of large models to previously intractable sizes and complexities.
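As a concrete illustration of one of these directions, the sketch below shows unstructured magnitude pruning, the simplest of the pruning techniques mentioned above: the smallest-magnitude fraction of a weight tensor is zeroed out, leaving a sparse tensor that can be stored or executed more cheaply. This is a minimal sketch assuming NumPy; the function name `magnitude_prune` and the 75% sparsity level are illustrative choices, not drawn from any specific paper surveyed here.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of `weights`."""
    if not 0.0 <= sparsity < 1.0:
        raise ValueError("sparsity must be in [0, 1)")
    k = int(sparsity * weights.size)  # number of entries to zero
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value; ties at the
    # threshold may zero slightly more than k entries.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
w_sparse = magnitude_prune(w, sparsity=0.75)
print(f"non-zeros: {np.count_nonzero(w_sparse)} / {w.size}")
```

In practice, pruning like this is usually interleaved with further training (iterative prune-and-finetune) rather than applied once, since retraining lets the remaining weights compensate for the removed ones.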