High Efficiency
High efficiency is a central research theme across computational domains: the goal is to minimize resource consumption (time, memory, energy) while maintaining or improving performance. Current efforts focus on novel algorithms and architectures, such as optimized Thompson sampling for reinforcement learning, sparse attention mechanisms for transformers, and model compression techniques, applied to diverse areas including natural language processing, computer vision, and robotics. These advances are crucial for deploying complex AI models on resource-constrained devices and for accelerating scientific discovery in data-intensive fields.
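To make one of the themes above concrete, the sketch below illustrates low-bit weight quantization of the kind used for efficient LLM serving. It is a minimal, generic example written for this summary, not the method of any listed paper; the function names, 4-bit setting, and per-channel scaling scheme are illustrative assumptions.

import numpy as np

def quantize_per_channel(weights: np.ndarray, n_bits: int = 4):
    """Illustrative sketch: quantize each output channel (row) of a weight
    matrix to signed n_bits integers, returning codes and per-channel scales."""
    qmax = 2 ** (n_bits - 1) - 1                      # e.g. 7 for 4-bit
    scales = np.abs(weights).max(axis=1, keepdims=True) / qmax
    scales = np.where(scales == 0, 1.0, scales)       # avoid divide-by-zero
    q = np.clip(np.round(weights / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Reconstruct an approximate float matrix from codes and scales."""
    return q.astype(np.float32) * scales

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((8, 16)).astype(np.float32)
    q, s = quantize_per_channel(w, n_bits=4)
    print("mean abs reconstruction error:", np.abs(w - dequantize(q, s)).mean())

Storing the int8/int4 codes plus a small vector of scales in place of full-precision weights is what reduces memory footprint and bandwidth; accuracy depends on the bit width and on how scales are chosen.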
Papers
Efficient and Low-Footprint Object Classification using Spatial Contrast
Matthew Belding, Daniel C. Stumpp, Rajkumar Kubendran
A Simple yet Efficient Ensemble Approach for AI-generated Text Detection
Harika Abburi, Kalyani Roy, Michael Suesserman, Nirmala Pudota, Balaji Veeramani, Edward Bowen, Sanmitra Bhattacharya
Efficient, Self-Supervised Human Pose Estimation with Inductive Prior Tuning
Nobline Yoo, Olga Russakovsky
Efficient Human-AI Coordination via Preparatory Language-based Convention
Cong Guan, Lichao Zhang, Chunpeng Fan, Yichen Li, Feng Chen, Lihe Li, Yunjia Tian, Lei Yuan, Yang Yu
AdaSent: Efficient Domain-Adapted Sentence Embeddings for Few-Shot Classification
Yongxin Huang, Kexin Wang, Sourav Dutta, Raj Nath Patel, Goran Glavaš, Iryna Gurevych
Atom: Low-bit Quantization for Efficient and Accurate LLM Serving
Yilong Zhao, Chien-Yu Lin, Kan Zhu, Zihao Ye, Lequn Chen, Size Zheng, Luis Ceze, Arvind Krishnamurthy, Tianqi Chen, Baris Kasikci
D2NO: Efficient Handling of Heterogeneous Input Function Spaces with Distributed Deep Neural Operators
Zecheng Zhang, Christian Moya, Lu Lu, Guang Lin, Hayden Schaeffer
Bayes beats Cross Validation: Efficient and Accurate Ridge Regression via Expectation Maximization
Shu Yu Tew, Mario Boley, Daniel F. Schmidt
SiDA-MoE: Sparsity-Inspired Data-Aware Serving for Efficient and Scalable Large Mixture-of-Experts Models
Zhixu Du, Shiyu Li, Yuhao Wu, Xiangyu Jiang, Jingwei Sun, Qilin Zheng, Yongkai Wu, Ang Li, Hai "Helen" Li, Yiran Chen