Sparse Approximation
Sparse approximation focuses on representing data using a minimal number of non-zero components, aiming to reduce computational complexity and improve efficiency in various applications. Current research emphasizes developing faster algorithms like improved Orthogonal Matching Pursuit and leveraging sparse structures within models such as Gaussian Processes and neural networks (including Binary Neural Networks and Vision Transformers) for enhanced scalability and accuracy. This field is crucial for addressing computational bottlenecks in machine learning, signal processing, and scientific computing, enabling the analysis of high-dimensional data and the deployment of complex models on resource-constrained devices.
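To make the greedy sparse-coding idea concrete, below is a minimal sketch of Orthogonal Matching Pursuit, the baseline algorithm referenced above. It is an illustrative NumPy implementation under assumed variable names (a dictionary D with unit-norm columns, a signal y, and a sparsity budget k), not the improved variant from any particular paper listed here.

```python
import numpy as np


def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of D to approximate y.

    D : (n, m) dictionary, ideally with unit-norm columns
    y : (n,) target signal
    k : number of non-zero coefficients to select
    Returns a length-m coefficient vector with at most k non-zeros.
    """
    n, m = D.shape
    residual = y.copy()
    support = []
    x = np.zeros(m)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        correlations = np.abs(D.T @ residual)
        correlations[support] = -np.inf       # do not reselect chosen atoms
        support.append(int(np.argmax(correlations)))
        # Re-fit all selected coefficients by least squares (the "orthogonal" step).
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x


# Usage: recover a synthetic 3-sparse signal from a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                # normalize columns
true_x = np.zeros(256)
true_x[[5, 40, 200]] = [1.0, -0.7, 0.4]
y = D @ true_x
x_hat = omp(D, y, k=3)
print("recovered support:", np.nonzero(x_hat)[0])
```

The least-squares re-fit over the whole selected support at each iteration is what distinguishes OMP from plain Matching Pursuit, and it is the main cost that faster variants aim to reduce.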
Papers
Paper entries dated from February 8, 2022 to May 27, 2024.