High Efficiency
Efficiency is a central research theme across computational domains: the goal is to minimize resource consumption (time, memory, energy) while maintaining or improving performance. Current efforts focus on novel algorithms and architectures, such as optimized Thompson sampling for reinforcement learning, sparse attention mechanisms for transformers, and model compression techniques, applied across natural language processing, computer vision, and robotics. These advances are crucial for deploying complex AI models on resource-constrained devices and for accelerating scientific discovery in data-intensive fields.
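As a concrete anchor for one of the techniques named above, the sketch below shows classic Thompson sampling on a Bernoulli multi-armed bandit: maintain a Beta posterior per arm, sample a plausible reward rate from each posterior, and pull the arm with the highest sample. This is the textbook method, not the optimized variant of any listed paper; the arm probabilities, horizon, and seed are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
true_probs = np.array([0.3, 0.5, 0.7])  # hypothetical per-arm reward rates
n_arms, horizon = len(true_probs), 1000
alpha = np.ones(n_arms)  # Beta posterior: 1 + observed successes per arm
beta = np.ones(n_arms)   # Beta posterior: 1 + observed failures per arm

total_reward = 0
for _ in range(horizon):
    # Draw one sample per arm from its posterior; the randomness of the
    # draw provides exploration, while the posterior mean drives exploitation.
    samples = rng.beta(alpha, beta)
    arm = int(np.argmax(samples))
    reward = int(rng.random() < true_probs[arm])
    alpha[arm] += reward
    beta[arm] += 1 - reward
    total_reward += reward

print(f"average reward over {horizon} pulls: {total_reward / horizon:.3f}")

With these settings the average reward approaches the best arm's rate (0.7) as the posteriors concentrate, which is the efficiency argument for posterior sampling: exploration cost decays automatically instead of being scheduled by hand.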
Papers
Exploring the Dynamics of Lotka-Volterra Systems: Efficiency, Extinction Order, and Predictive Machine Learning
Sepideh Vafaie, Deepak Bal, Michael A.S. Thorne, Eric Forgoston
Enhancing JEPAs with Spatial Conditioning: Robust and Efficient Representation Learning
Etai Littwin, Vimal Thilak, Anand Gopalakrishnan
Towards LLM-guided Efficient and Interpretable Multi-linear Tensor Network Rank Selection
Giorgos Iacovides, Wuyang Zhou, Danilo Mandic
Ada-K Routing: Boosting the Efficiency of MoE-based LLMs
Tongtian Yue, Longteng Guo, Jie Cheng, Xuange Gao, Jing Liu
Free Video-LLM: Prompt-guided Visual Perception for Efficient Training-free Video LLMs
Kai Han, Jianyuan Guo, Yehui Tang, Wei He, Enhua Wu, Yunhe Wang
RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates
Md Kowsher, Tara Esmaeilbeig, Chun-Nam Yu, Mojtaba Soltanalian, Niloofar Yousefi
Prompt Tuning for Audio Deepfake Detection: Computationally Efficient Test-time Domain Adaptation with Limited Target Dataset
Hideyuki Oiso, Yuto Matsunaga, Kazuya Kakizaki, Taiki Miyagawa
Towards Defining an Efficient and Expandable File Format for AI-Generated Contents
Yixin Gao, Runsen Feng, Xin Li, Weiping Li, Zhibo Chen
Expanding Search Space with Diverse Prompting Agents: An Efficient Sampling Approach for LLM Mathematical Reasoning
Gisang Lee, Sangwoo Park, Junyoung Park, Andrew Chung, Sieun Park, Yoonah Park, Byungju Kim, Min-gyu Cho
Exploring Natural Language-Based Strategies for Efficient Number Learning in Children through Reinforcement Learning
Tirthankar Mittra
Meissonic: Revitalizing Masked Generative Transformers for Efficient High-Resolution Text-to-Image Synthesis
Jinbin Bai, Tian Ye, Wei Chow, Enxin Song, Qing-Guo Chen, Xiangtai Li, Zhen Dong, Lei Zhu, Shuicheng Yan
HybridBooth: Hybrid Prompt Inversion for Efficient Subject-Driven Generation
Shanyan Guan, Yanhao Ge, Ying Tai, Jian Yang, Wei Li, Mingyu You
Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency
Florian Hahlbohm, Fabian Friederichs, Tim Weyrich, Linus Franke, Moritz Kappel, Susana Castillo, Marc Stamminger, Martin Eisemann, Marcus Magnor
Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System
Weize Chen, Jiarui Yuan, Chen Qian, Cheng Yang, Zhiyuan Liu, Maosong Sun