High Efficiency
Achieving high efficiency is a central research theme across computational domains: the goal is to minimize resource consumption (time, memory, and energy) while maintaining or improving task performance. Current efforts focus on novel algorithms and architectures, such as optimized Thompson sampling for reinforcement learning, sparse attention mechanisms for transformers, and model compression techniques, across applications including natural language processing, computer vision, and robotics. These advances are crucial for deploying complex AI models on resource-constrained devices and for accelerating scientific discovery in data-intensive fields.
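To make one of the named techniques concrete, below is a minimal, illustrative sketch of Thompson sampling on a Bernoulli multi-armed bandit, a standard building block for sample-efficient exploration in reinforcement learning. It is not drawn from any of the listed papers; the arm probabilities, Beta(1, 1) priors, and horizon are hypothetical choices made for this example.

```python
# Minimal sketch of Thompson sampling on a Bernoulli bandit (illustrative only;
# the arm means, priors, and horizon are hypothetical, not from the papers above).
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.7])   # hypothetical arm reward probabilities
n_arms = len(true_means)

# Beta(1, 1) prior over each arm's unknown success probability.
successes = np.ones(n_arms)
failures = np.ones(n_arms)

for t in range(1000):
    # Thompson sampling step: draw one plausible mean per arm from its posterior,
    # then act greedily with respect to the drawn samples.
    sampled = rng.beta(successes, failures)
    arm = int(np.argmax(sampled))

    # Observe a Bernoulli reward and update the chosen arm's posterior counts.
    reward = rng.random() < true_means[arm]
    if reward:
        successes[arm] += 1
    else:
        failures[arm] += 1

print("posterior means:", successes / (successes + failures))
```

The efficiency argument is that posterior sampling concentrates pulls on promising arms without requiring an explicit exploration schedule, which keeps per-step computation and sample cost low.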
Papers
GRITv2: Efficient and Light-weight Social Relation Recognition
N K Sagar Reddy, Neeraj Kasera, Avinash Thakur
Efficient first-order algorithms for large-scale, non-smooth maximum entropy models with application to wildfire science
Gabriel P. Langlois, Jatan Buch, Jérôme Darbon
An Efficient Learning-based Solver Comparable to Metaheuristics for the Capacitated Arc Routing Problem
Runze Guo, Feng Xue, Anlong Ming, Nicu Sebe
Snapshot Reinforcement Learning: Leveraging Prior Trajectories for Efficiency
Yanxiao Zhao, Yangge Qian, Tianyi Wang, Jingyang Shan, Xiaolin Qin
Bias Mitigation in Fine-tuning Pre-trained Models for Enhanced Fairness and Efficiency
Yixuan Zhang, Feng Zhou
Fast and Efficient Local Search for Genetic Programming Based Loss Function Learning
Christian Raymond, Qi Chen, Bing Xue, Mengjie Zhang
Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models
Seungduk Kim, Seungtaek Choi, Myeongho Jeong
YOLO-TLA: An Efficient and Lightweight Small Object Detection Model based on YOLOv5
Peng Gao, Chun-Lin Ji, Tao Yu, Ru-Yue Yuan
Combining Constrained Diffusion Models and Numerical Solvers for Efficient and Robust Non-Convex Trajectory Optimization
Anjian Li, Zihan Ding, Adji Bousso Dieng, Ryne Beeson