Quantization Learning
Quantization learning focuses on representing data with fewer bits, reducing the computational cost and memory footprint of machine learning models, particularly in resource-constrained environments. Current research explores a range of quantization techniques, including asymmetric quantization with learnable parameters, and their integration with model architectures such as transformers and convolutional neural networks, often within self-supervised or federated learning frameworks. This work matters for improving the efficiency and scalability of machine learning across diverse applications, from image compression and retrieval to edge computing and personalized federated learning.
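The asymmetric quantization mentioned above can be sketched in a few lines. This is a minimal illustration (the function names are hypothetical, not from any specific paper): values are mapped to unsigned integers via a scale and a zero-point. Here both are derived statically from the data range; in the learnable variants the research refers to, scale and zero-point would instead be trained parameters.

```python
def asymmetric_quantize(x, num_bits=8):
    """Map floats in [min(x), max(x)] to ints in [0, 2**num_bits - 1].

    A static sketch: scale and zero_point come from the observed range.
    In learnable-parameter schemes they are optimized during training.
    """
    qmin, qmax = 0, (1 << num_bits) - 1
    lo, hi = min(x), max(x)
    # Guard against a constant input, which would give a zero range.
    scale = (hi - lo) / (qmax - qmin) if hi != lo else 1.0
    # zero_point is the integer that represents the real value 0.0.
    zero_point = round(qmin - lo / scale)
    # Quantize, clamping to the representable integer range.
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in x]
    return q, scale, zero_point


def dequantize(q, scale, zero_point):
    """Recover approximate float values from the quantized integers."""
    return [(v - zero_point) * scale for v in q]
```

For example, quantizing `[-1.0, 0.0, 2.0]` to 8 bits spreads the asymmetric range across the full `[0, 255]` integer span, something symmetric quantization (zero-point fixed at the midpoint) cannot do.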