High Efficiency
Achieving high efficiency across computational domains is a central research theme: minimizing resource consumption (time, memory, energy) while maintaining or improving task performance. Current efforts focus on novel algorithms and architectures, such as optimized Thompson sampling for reinforcement learning, sparse attention mechanisms for transformers, and model compression techniques, applied across diverse areas including natural language processing, computer vision, and robotics. These advances are crucial for deploying complex AI models on resource-constrained devices and for accelerating scientific discovery in data-intensive fields.
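To make one of the named techniques concrete, the sketch below shows Thompson sampling on a Bernoulli multi-armed bandit, a common efficiency-oriented exploration strategy in reinforcement learning. This is an illustrative toy example, not code from any of the listed papers; the arm probabilities and round count are arbitrary assumptions.

```python
import random

def thompson_sampling(true_probs, n_rounds, seed=0):
    """Bernoulli bandit with Beta(1, 1) priors per arm.

    Each round: sample a success probability from every arm's
    Beta posterior, pull the arm with the highest sample, then
    update that arm's posterior with the observed 0/1 reward.
    Returns the number of pulls per arm.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    alpha = [1] * n_arms  # Beta posterior successes + 1
    beta = [1] * n_arms   # Beta posterior failures + 1
    pulls = [0] * n_arms
    for _ in range(n_rounds):
        samples = [rng.betavariate(alpha[a], beta[a]) for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: samples[a])
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

# Hypothetical three-arm bandit: the sampler should concentrate
# its pulls on the best arm (index 2) as posteriors sharpen.
pulls = thompson_sampling([0.2, 0.5, 0.8], n_rounds=2000)
```

Because sampling from the posterior balances exploration and exploitation automatically, Thompson sampling typically wastes far fewer pulls on bad arms than naive uniform exploration, which is the sense in which it is resource-efficient.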
Papers
SparseProp: Efficient Event-Based Simulation and Training of Sparse Recurrent Spiking Neural Networks
Rainer Engelken
Efficient Physics-Based Learned Reconstruction Methods for Real-Time 3D Near-Field MIMO Radar Imaging
Irfan Manisali, Okyanus Oral, Figen S. Oktem
Efficient High-Quality Clustering for Large Bipartite Graphs
Renchi Yang, Jieming Shi
Block Pruning for Enhanced Efficiency in Convolutional Neural Networks
Cheng-En Wu, Azadeh Davoodi, Yu Hen Hu
EAGLES: Efficient Accelerated 3D Gaussians with Lightweight EncodingS
Sharath Girish, Kamal Gupta, Abhinav Shrivastava
Natural-language-driven Simulation Benchmark and Copilot for Efficient Production of Object Interactions in Virtual Road Scenes
Kairui Yang, Zihao Guo, Gengjie Lin, Haotian Dong, Die Zuo, Jibin Peng, Zhao Huang, Zhecheng Xu, Fupeng Li, Ziyun Bai, Di Lin