Linear Compression
Linear compression techniques aim to reduce the size of data or models while minimizing information loss, which is crucial for efficient storage, transmission, and processing, especially with the rise of large language models and high-resolution data. Current research focuses on adapting and developing compression methods for various model architectures, including transformers and neural radiance fields, employing techniques such as low-rank approximation, quantization, pruning, and hierarchical clustering. These advances improve the efficiency and scalability of machine learning applications across diverse domains, from natural language processing and image compression to federated learning and earth observation.
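As an illustration of the low-rank approximation idea mentioned above, the sketch below compresses a weight matrix via truncated SVD, replacing an m×n matrix with two thin factors. The function name, shapes, and rank are illustrative assumptions, not taken from any specific paper in this collection.

```python
# Illustrative sketch of low-rank compression via truncated SVD.
# All names and parameter choices here are assumptions for demonstration.
import numpy as np

def low_rank_compress(W: np.ndarray, rank: int):
    """Return factors (A, B) with W approximately equal to A @ B."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # shape (m, rank), singular values folded in
    B = Vt[:rank, :]             # shape (rank, n)
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))
A, B = low_rank_compress(W, rank=32)

orig_params = W.size                     # 256 * 128 = 32768
compressed_params = A.size + B.size      # 256*32 + 32*128 = 12288
error = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"params: {orig_params} -> {compressed_params}, rel. error {error:.3f}")
```

Storing the factors instead of the full matrix cuts the parameter count whenever `rank < m*n / (m + n)`; the relative error depends on how quickly the singular values of the original matrix decay.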