High Efficiency
High efficiency is a central research theme across computational domains: the goal is to minimize resource consumption (time, memory, energy) while maintaining or improving task performance. Current efforts focus on novel algorithms and architectures, such as optimized Thompson sampling for reinforcement learning, sparse attention mechanisms for transformers, and efficient model compression techniques, applied across diverse areas including natural language processing, computer vision, and robotics. These advances are crucial both for deploying complex AI models on resource-constrained devices and for accelerating scientific discovery in data-intensive fields.
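As one concrete illustration of the efficiency-oriented techniques named above, the sketch below shows classic Thompson sampling for a Bernoulli multi-armed bandit. This is a generic textbook formulation, not the method of any paper listed here; the arm probabilities and Beta(1, 1) priors are illustrative assumptions.

```python
import random

def thompson_sampling(true_probs, n_rounds=2000, seed=0):
    """Bernoulli-bandit Thompson sampling with Beta(1, 1) priors on each arm.

    true_probs: hypothetical per-arm success probabilities (for simulation only).
    Returns the per-arm Beta parameters and the total reward collected.
    """
    rng = random.Random(seed)
    k = len(true_probs)
    alphas = [1] * k  # successes + 1 (Beta alpha parameter)
    betas = [1] * k   # failures + 1 (Beta beta parameter)
    total_reward = 0
    for _ in range(n_rounds):
        # Draw one posterior sample per arm and play the arm with the largest draw.
        samples = [rng.betavariate(alphas[i], betas[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        total_reward += reward
        # Conjugate Bayesian update of the played arm's posterior.
        if reward:
            alphas[arm] += 1
        else:
            betas[arm] += 1
    return alphas, betas, total_reward
```

Because exploration is driven by posterior sampling rather than exhaustive trials, pulls concentrate on the best arm as evidence accumulates, which is the source of the method's sample efficiency.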
Papers
Efficient and Effective Methods for Mixed Precision Neural Network Quantization for Faster, Energy-efficient Inference
Deepika Bablani, Jeffrey L. McKinstry, Steven K. Esser, Rathinakumar Appuswamy, Dharmendra S. Modha
Autobidders with Budget and ROI Constraints: Efficiency, Regret, and Pacing Dynamics
Brendan Lucier, Sarath Pattathil, Aleksandrs Slivkins, Mengxiao Zhang
OccRob: Efficient SMT-Based Occlusion Robustness Verification of Deep Neural Networks
Xingwu Guo, Ziwei Zhou, Yueling Zhang, Guy Katz, Min Zhang
Bi-AM-RRT*: A Fast and Efficient Sampling-Based Motion Planning Algorithm in Dynamic Environments
Ying Zhang, Heyong Wang, Maoliang Yin, Jiankun Wang, Changchun Hua
Efficient Preference-Based Reinforcement Learning Using Learned Dynamics Models
Yi Liu, Gaurav Datta, Ellen Novoseller, Daniel S. Brown
Word-Graph2vec: An efficient word embedding approach on word co-occurrence graph using random walk technique
Wenting Li, Jiahong Xue, Xi Zhang, Huacan Chen, Zeyu Chen, Feijuan Huang, Yuanzhe Cai