Compressed Models
Model compression aims to reduce the size and computational cost of large machine-learning models, particularly deep networks and large language models (LLMs), while preserving performance. Current research focuses on novel compression techniques, including pruning, quantization, low-rank decomposition, and compression schemes built on transformers and autoencoders, often tailored to specific applications or model architectures. These advances are crucial for deploying sophisticated models on resource-constrained devices and for improving the efficiency and sustainability of AI systems, with impact across fields from image processing and natural language processing to medical imaging and scientific computing.
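To make two of the named techniques concrete, the following is a minimal sketch (not any particular paper's method) of unstructured magnitude pruning and symmetric 8-bit post-training quantization applied to a weight matrix. All function names and the 90% sparsity setting are illustrative assumptions; real systems typically combine such steps with fine-tuning to recover accuracy.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value across the whole tensor.
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def quantize_int8(weights):
    """Symmetric uniform quantization: int8 codes plus one float scale per tensor."""
    scale = float(np.abs(weights).max()) / 127.0
    codes = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from int8 codes."""
    return codes.astype(np.float32) * scale

# Hypothetical usage: prune a random weight matrix, then quantize the survivors.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.9)   # ~90% of entries set to zero
codes, scale = quantize_int8(w_pruned)        # 4 bytes/weight -> 1 byte + one scale
w_hat = dequantize(codes, scale)              # per-weight error bounded by scale/2
```

Pruning and quantization compose naturally, as above, because they attack different costs: pruning removes parameters outright, while quantization shrinks the storage and arithmetic precision of those that remain.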