Tensor Decomposition Methods
Tensor decomposition methods represent high-dimensional data (tensors) as products of smaller factor matrices (and, in some formats, a small core tensor), yielding data compression and more efficient computation. Current research focuses on adapting these methods to a range of applications, including compressing neural networks (e.g., via Kronecker or Tucker decompositions), improving time-series classification, and shrinking the key-value cache of large language models. These advances matter because they enable efficient processing of massive datasets and deployment of complex models on resource-constrained devices, with impact across machine learning, natural language processing, and computer vision.
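As a concrete illustration of the factor-matrices-plus-core idea, the sketch below implements a minimal higher-order SVD (HOSVD), one standard way to compute a Tucker decomposition. It is a simplified illustration, not taken from any of the papers listed here: the `unfold`, `mode_product`, and `hosvd` names are our own, and the ranks are assumed known in advance.

```python
import numpy as np

def unfold(T, mode):
    # Matricize tensor T along `mode`: shape (I_mode, prod of other dims).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    # Mode-`mode` product: contract T's `mode` axis with M's columns,
    # so the resulting axis has size M.shape[0].
    return np.moveaxis(np.tensordot(T, M, axes=(mode, 1)), -1, mode)

def hosvd(T, ranks):
    # Factor matrix per mode: leading left singular vectors of each unfolding.
    factors = [
        np.linalg.svd(unfold(T, mode), full_matrices=False)[0][:, :r]
        for mode, r in enumerate(ranks)
    ]
    # Core tensor: project T onto each factor's column space.
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)
    return core, factors

def reconstruct(core, factors):
    # Multilinear product of the core with all factor matrices.
    T_hat = core
    for mode, U in enumerate(factors):
        T_hat = mode_product(T_hat, U, mode)
    return T_hat
```

The compression argument follows directly: a dense `50 x 50 x 50` tensor stores 125,000 values, while a rank-`(5, 5, 5)` Tucker representation stores a `5 x 5 x 5` core plus three `50 x 5` factors, about 875 values. When the data truly has low multilinear rank, the HOSVD above recovers it exactly; otherwise it gives a (near-optimal, though not optimal) low-rank approximation.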