Inference Cost
Inference cost, the computational expense of running a machine learning model, is a critical concern, especially for large language models (LLMs) and other resource-intensive architectures. Current research focuses on reducing this cost through several families of techniques: model compression (e.g., pruning, quantization, low-rank decomposition), efficient model architectures (e.g., Mixture-of-Experts, sparse networks), and optimized inference strategies (e.g., early exiting, cascading, and specialized prompt handling). Lowering inference cost is key to deploying advanced AI models more broadly, making them accessible to more users and reducing the environmental footprint of AI computation.
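To make one of these strategies concrete, below is a minimal PyTorch sketch of early exiting: a toy classifier with a cheap intermediate head that skips the deeper, more expensive layers whenever its prediction is already confident. The class name `EarlyExitMLP`, the layer sizes, and the 0.9 confidence threshold are illustrative assumptions, not taken from any of the papers listed here.

```python
import torch
import torch.nn as nn

class EarlyExitMLP(nn.Module):
    """Toy classifier with an intermediate exit head (illustrative only).

    If the early head's softmax confidence meets `threshold`,
    inference stops there, skipping the later (costlier) layers.
    """

    def __init__(self, in_dim=32, hidden=64, n_classes=10, threshold=0.9):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.exit1 = nn.Linear(hidden, n_classes)   # cheap early head
        self.block2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.final = nn.Linear(hidden, n_classes)   # full-depth head
        self.threshold = threshold

    @torch.no_grad()
    def forward(self, x):
        h = self.block1(x)
        early_logits = self.exit1(h)
        # Confidence = max softmax probability of the early head
        # (assumes batch size 1 for the scalar .item() check).
        conf = early_logits.softmax(-1).max(-1).values
        if conf.item() >= self.threshold:
            return early_logits                     # confident: exit early
        return self.final(self.block2(h))           # otherwise run full depth

model = EarlyExitMLP()
sample = torch.randn(1, 32)
print(model(sample).argmax(-1))
```

The trade-off this sketch exposes is typical of the area: a higher threshold preserves accuracy but exits less often, while a lower one saves more compute at the risk of committing to an early, lower-quality prediction.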
Papers
StreamAdapter: Efficient Test Time Adaptation from Contextual Streams
Dilxat Muhtar, Yelong Shen, Yaming Yang, Xiaodong Liu, Yadong Lu, Jianfeng Liu, Yuefeng Zhan, Hao Sun, Weiwei Deng, Feng Sun, Xueliang Zhang, Jianfeng Gao, Weizhu Chen, Qi Zhang
Reducing Reasoning Costs - The Path of Optimization for Chain of Thought via Sparse Attention Mechanism
Libo Wang