High Efficiency
High efficiency is a central research theme across computational domains: the goal is to minimize resource consumption (time, memory, energy) while maintaining or improving task performance. Current efforts develop novel algorithms and architectures, such as optimized Thompson sampling for reinforcement learning, sparse attention mechanisms for transformers, and model compression techniques, and apply them across diverse areas including natural language processing, computer vision, and robotics. These advances are crucial for deploying complex AI models on resource-constrained devices and for accelerating scientific discovery in data-intensive fields.
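To make one of the named techniques concrete, below is a minimal sketch of Thompson sampling for a Bernoulli multi-armed bandit. This is the generic textbook formulation, not the optimized variant from any paper listed here; the reward probabilities in true_means, the 1000-step horizon, and the Beta(1, 1) priors are all illustrative assumptions.

```python
import numpy as np

# Minimal, generic sketch of Thompson sampling for a Bernoulli bandit.
rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.7])  # hypothetical per-arm reward probabilities
successes = np.ones(len(true_means))     # Beta(1, 1) uniform prior per arm
failures = np.ones(len(true_means))

for t in range(1000):
    # Sample a plausible mean reward for each arm from its Beta posterior,
    # then play the arm whose sample is largest.
    samples = rng.beta(successes, failures)
    arm = int(np.argmax(samples))
    reward = rng.random() < true_means[arm]
    successes[arm] += reward
    failures[arm] += 1 - reward

print("posterior means:", successes / (successes + failures))
```

Acting on posterior samples rather than point estimates is what makes the method sample-efficient: exploration decays automatically as the posteriors concentrate, with no hand-tuned exploration schedule.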
Papers
From Two-Stream to One-Stream: Efficient RGB-T Tracking via Mutual Prompt Learning and Knowledge Distillation
Yang Luo, Xiqing Guo, Hao Li
DBPF: A Framework for Efficient and Robust Dynamic Bin-Picking
Yichuan Li, Junkai Zhao, Yixiao Li, Zheng Wu, Rui Cao, Masayoshi Tomizuka, Yunhui Liu
VMRNN: Integrating Vision Mamba and LSTM for Efficient and Accurate Spatiotemporal Forecasting
Yujin Tang, Peijie Dong, Zhenheng Tang, Xiaowen Chu, Junwei Liang
Elite360D: Towards Efficient 360 Depth Estimation via Semantic- and Distance-Aware Bi-Projection Fusion
Hao Ai, Lin Wang
Balancing Fairness and Efficiency in Energy Resource Allocations
Jiayi Li, Matthew Motoki, Baosen Zhang
LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models
Yuzhang Shang, Mu Cai, Bingxin Xu, Yong Jae Lee, Yan Yan
Towards a Comprehensive, Efficient and Promptable Anatomic Structure Segmentation Model using 3D Whole-body CT Scans
Heng Guo, Jianfeng Zhang, Jiaxing Huang, Tony C. W. Mok, Dazhou Guo, Ke Yan, Le Lu, Dakai Jin, Minfeng Xu
LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression
Zhuoshi Pan, Qianhui Wu, Huiqiang Jiang, Menglin Xia, Xufang Luo, Jue Zhang, Qingwei Lin, Victor Rühle, Yuqing Yang, Chin-Yew Lin, H. Vicky Zhao, Lili Qiu, Dongmei Zhang
Jetfire: Efficient and Accurate Transformer Pretraining with INT8 Data Flow and Per-Block Quantization
Haocheng Xi, Yuxiang Chen, Kang Zhao, Kaijun Zheng, Jianfei Chen, Jun Zhu
Motion Mamba: Efficient and Long Sequence Motion Generation with Hierarchical and Bidirectional Selective SSM
Zeyu Zhang, Akide Liu, Ian Reid, Richard Hartley, Bohan Zhuang, Hao Tang
Time-Efficient and Identity-Consistent Virtual Try-On Using A Variant of Altered Diffusion Models
Phuong Dam, Jihoon Jeong, Anh Tran, Daeyoung Kim