Tensor Approximation
Tensor approximation represents high-dimensional data (tensors) with compact low-rank structures, reducing computational cost and memory usage while preserving the essential information. Current research emphasizes efficient algorithms such as CP decomposition and Tucker decomposition, along with newer approaches such as signed cut decompositions and tensor ring decompositions, applied to model architectures including Vision Transformers and large language models. These advances are central to deploying large-scale machine learning models on resource-constrained devices and to improving efficiency across applications ranging from image processing and natural language processing to reinforcement learning.
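To make the idea concrete, here is a minimal sketch of a Tucker approximation via truncated higher-order SVD (HOSVD), one standard way to compute Tucker factors. This is an illustrative example, not code from any of the listed papers; the helper names (`unfold`, `mode_product`, `hosvd`), the tensor sizes, and the chosen ranks are assumptions for the demo.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: bring `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    # Mode-n product: contract matrix M (new_dim x old_dim) with axis `mode` of T.
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd(T, ranks):
    # Truncated HOSVD: T is approximated by a small core G and
    # orthonormal factor matrices U_n, i.e. T ~ G x_1 U_1 x_2 U_2 x_3 U_3.
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])          # keep the r leading left singular vectors
    G = T
    for mode, U in enumerate(factors):
        G = mode_product(G, U.T, mode)    # project onto the factor subspaces
    return G, factors

# Demo (assumed sizes): build an exactly low-multilinear-rank tensor,
# compress it, and check the reconstruction error.
rng = np.random.default_rng(0)
core = rng.standard_normal((2, 3, 4))
Us = [rng.standard_normal((d, r)) for d, r in zip((5, 6, 7), (2, 3, 4))]
T = core
for mode, U in enumerate(Us):
    T = mode_product(T, U, mode)          # T has multilinear rank (2, 3, 4)

G, factors = hosvd(T, (2, 3, 4))
T_hat = G
for mode, U in enumerate(factors):
    T_hat = mode_product(T_hat, U, mode)  # reconstruct from core and factors
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
print(f"core shape: {G.shape}, relative error: {rel_err:.2e}")
```

Because the demo tensor has exactly the multilinear rank requested, the truncated HOSVD recovers it to numerical precision; on real data one would see a trade-off between the chosen ranks (storage) and the approximation error.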
Papers
- October 2, 2024
- June 5, 2024
- May 27, 2024
- February 8, 2024
- February 5, 2024
- August 8, 2023
- June 19, 2023
- May 31, 2023
- May 19, 2023
- April 26, 2023