Linear Compression
Linear compression techniques aim to reduce the size of data or models while minimizing information loss, which is crucial for efficient storage, transmission, and processing, especially with the rise of large language models and high-resolution data. Current research focuses on adapting and developing compression methods for various model architectures, including transformers and neural radiance fields, using techniques such as low-rank approximation, quantization, pruning, and hierarchical clustering. These advances improve the efficiency and scalability of machine learning applications across diverse domains, from natural language processing and image compression to federated learning and earth observation.