Sparse Coding
Sparse coding aims to represent data using a minimal number of non-zero coefficients, yielding efficient and robust representations. Current research focuses on improved algorithms for learning sparse codes, including variational autoencoders and iterative methods such as ISTA (the Iterative Shrinkage-Thresholding Algorithm), and on applying these techniques to diverse areas such as video compression, object recognition, and quantum computing. The resulting sparse representations offer benefits in dimensionality reduction, model efficiency, and robustness to noise and distortions, with impact on fields ranging from machine learning to signal processing.
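To make the iterative approach concrete, the following is a minimal sketch of ISTA for the lasso formulation of sparse coding, min_x 0.5·||y − Dx||² + λ·||x||₁. The dictionary, signal, and parameter values here are illustrative assumptions, not drawn from any particular paper above.

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=500):
    """ISTA sketch: alternate a gradient step on 0.5*||y - D x||^2
    with soft-thresholding, which enforces sparsity in x."""
    L = np.linalg.norm(D, ord=2) ** 2     # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - D.T @ (D @ x - y) / L     # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy example: a signal built from 3 of 100 unit-norm dictionary atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 100))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [1.5, -2.0, 1.0]
y = D @ x_true

x_hat = ista(D, y, lam=0.01)
print(np.count_nonzero(np.abs(x_hat) > 0.1))  # only a few large coefficients
```

The soft-thresholding step is what produces sparsity: any coefficient whose update falls below λ/L in magnitude is set exactly to zero, so after convergence only a small set of atoms participates in the reconstruction.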