Linear Compression
Linear compression techniques aim to reduce the size of data or models while minimizing information loss, a capability that is crucial for efficient storage, transmission, and processing, especially with the rise of large language models and high-resolution data. Current research focuses on adapting and developing compression methods for various model architectures, including transformers and neural radiance fields, using techniques such as low-rank approximation, quantization, pruning, and hierarchical clustering. These advances improve the efficiency and scalability of machine learning applications across diverse domains, from natural language processing and image compression to federated learning and Earth observation.
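To make one of these techniques concrete, the sketch below shows how low-rank approximation can compress a single weight matrix via truncated SVD. This is a minimal, illustrative example; the function name `low_rank_compress`, the matrix size, and the chosen rank are assumptions for demonstration and are not taken from any of the papers listed below.

```python
import numpy as np

def low_rank_compress(weight: np.ndarray, rank: int):
    """Truncated-SVD approximation of a weight matrix.

    Keeps only the top `rank` singular values, reducing storage from
    m*n parameters to rank*(m + n) while bounding reconstruction error.
    """
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    u_r = u[:, :rank] * s[:rank]   # fold singular values into the left factor
    vt_r = vt[:rank, :]
    return u_r, vt_r               # store these two factors instead of `weight`

# Illustrative usage: compress a 1024x1024 layer to rank 64
# (64 * (1024 + 1024) parameters, about 12.5% of the original 1024 * 1024).
w = np.random.randn(1024, 1024)
u_r, vt_r = low_rank_compress(w, rank=64)
w_approx = u_r @ vt_r
print(np.linalg.norm(w - w_approx) / np.linalg.norm(w))  # relative reconstruction error
```

Quantization and pruning follow the same pattern of trading a small, controlled approximation error for a large reduction in storage and compute, but they act on the precision and sparsity of the weights rather than on their rank.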
Papers
On the compression of shallow non-causal ASR models using knowledge distillation and tied-and-reduced decoder for low-latency on-device speech recognition
Nagaraj Adiga, Jinhwan Park, Chintigari Shiva Kumar, Shatrughan Singh, Kyungmin Lee, Chanwoo Kim, Dhananjaya Gowda
OTOv3: Automatic Architecture-Agnostic Neural Network Training and Compression from Structured Pruning to Erasing Operators
Tianyi Chen, Tianyu Ding, Zhihui Zhu, Zeyu Chen, HsiangTao Wu, Ilya Zharkov, Luming Liang
ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization
Prateek Yadav, Leshem Choshen, Colin Raffel, Mohit Bansal
White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is?
Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Hao Bai, Yuexiang Zhai, Benjamin D. Haeffele, Yi Ma