Inference Speed
Inference speed, the time a machine learning model takes to process an input and produce an output, is a critical factor limiting the deployment of powerful models in resource-constrained environments and real-time applications. Current research focuses on optimizing architectures such as transformers and diffusion models through techniques like knowledge distillation, model pruning, parallel decoding, and early exiting, with the goal of reducing latency significantly without sacrificing accuracy. These advances are crucial for bringing large language models, computer vision systems, and other computationally intensive AI systems to diverse platforms, from smartphones to embedded devices.
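As one illustration of these techniques, below is a minimal sketch of a knowledge-distillation training loss in PyTorch, where a small, fast student model is trained to imitate a larger teacher. The function name, tensor shapes, and default hyperparameters (temperature, alpha) are illustrative assumptions, not drawn from any specific paper listed here.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=2.0, alpha=0.5):
    # Soften both output distributions with the temperature, then push the
    # student toward the teacher via KL divergence; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2

    # Standard cross-entropy against the ground-truth hard labels.
    ce = F.cross_entropy(student_logits, targets)

    # alpha trades off imitating the teacher against the hard-label task.
    return alpha * kd + (1.0 - alpha) * ce

# Example usage with hypothetical tensors (batch of 8, 100 classes):
student_logits = torch.randn(8, 100)   # output of the small student model
teacher_logits = torch.randn(8, 100)   # output of the frozen teacher model
targets = torch.randint(0, 100, (8,))
loss = distillation_loss(student_logits, teacher_logits, targets)
```

At inference time only the student runs, so the latency savings come directly from its smaller size; the teacher is used only during training.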