Speed Effect
Research on the "speed effect" spans diverse fields and focuses on accelerating computational processes without sacrificing accuracy. Current efforts concentrate on improving the efficiency of deep learning models, particularly in object pose estimation, graph neural networks, and language model evaluation, using techniques such as model pruning, knowledge distillation, and parallel processing. These advances matter for real-world applications, from robotics and autonomous driving to medical devices and brain-computer interfaces, because they enable faster processing of complex data. The overarching goal is a favorable trade-off between speed and accuracy, making these methods practical and impactful across domains.
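To make one of the named techniques concrete: knowledge distillation trains a small, fast "student" model to match the softened output distribution of a larger "teacher." The sketch below is a minimal, self-contained illustration of the standard distillation loss (temperature-softened softmax plus KL divergence), not code from any of the listed papers; the function names and the choice of temperature are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    # Soften logits by dividing by the temperature before normalizing;
    # higher temperatures spread probability mass across classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

During training, this term is typically mixed with the ordinary cross-entropy loss on ground-truth labels; the student minimizes the weighted sum, inheriting much of the teacher's accuracy at a fraction of the inference cost.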
Papers
Decoding at the Speed of Thought: Harnessing Parallel Decoding of Lexical Units for LLMs
Chenxi Sun, Hongzhi Zhang, Zijia Lin, Jingyuan Zhang, Fuzheng Zhang, Zhongyuan Wang, Bin Chen, Chengru Song, Di Zhang, Kun Gai, Deyi Xiong
HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting
Yuanhao Cai, Zihao Xiao, Yixun Liang, Minghan Qin, Yulun Zhang, Xiaokang Yang, Yaoyao Liu, Alan Yuille