Real Power
Real power in artificial intelligence research currently centers on understanding and leveraging the capabilities of large language models (LLMs) across a range of tasks, moving beyond traditional fine-tuning toward more efficient approaches such as in-context learning. Research focuses on improving LLM performance through techniques such as self-prompting, on novel architectures such as autoregressive decision trees, and on incorporating external knowledge sources to enhance reasoning and reduce hallucinations. These advances have significant implications for diverse fields, including natural language processing, computer vision, and scientific discovery, by enabling more efficient and effective solutions to complex problems.
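As a rough illustration of the in-context learning idea mentioned above, the minimal Python sketch below builds a few-shot prompt by adaptively selecting the demonstrations most relevant to a query. It is a generic sketch, not the method of any paper listed here; the names EXAMPLE_POOL, score_relevance, and build_prompt are hypothetical placeholders.

# Minimal sketch of in-context learning with adaptive example selection.
# EXAMPLE_POOL, score_relevance, and build_prompt are hypothetical names,
# not APIs from any of the papers listed below.

EXAMPLE_POOL = [
    {"question": "2 + 2 = ?", "answer": "4"},
    {"question": "Capital of France?", "answer": "Paris"},
    {"question": "3 * 5 = ?", "answer": "15"},
]

def score_relevance(example, query):
    # Toy relevance score: count of words shared between example and query.
    shared = set(example["question"].lower().split()) & set(query.lower().split())
    return len(shared)

def build_prompt(query, k=2):
    # Adaptively pick the k pool examples most relevant to the query,
    # then prepend them as demonstrations (few-shot prompting).
    demos = sorted(EXAMPLE_POOL,
                   key=lambda ex: score_relevance(ex, query),
                   reverse=True)[:k]
    lines = [f"Q: {ex['question']}\nA: {ex['answer']}" for ex in demos]
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

print(build_prompt("4 + 4 = ?"))
# The assembled prompt would be sent to an LLM, which answers the final
# question by imitating the in-context demonstrations, with no weight updates.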
Papers
The Power of Adaptation: Boosting In-Context Learning through Adaptive Prompting
Shuzhang Cai, Twumasi Mensah-Boateng, Xander Kuksov, Jing Yuan, Shaojie Tang
Power- and Fragmentation-aware Online Scheduling for GPU Datacenters
Francesco Lettich, Emanuele Carlini, Franco Maria Nardini, Raffaele Perego, Salvatore Trani
On the Power and Limitations of Examples for Description Logic Concepts
Balder ten Cate, Raoul Koudijs, Ana Ozaki
HoVLE: Unleashing the Power of Monolithic Vision-Language Models with Holistic Vision-Language Embedding
Chenxin Tao, Shiqian Su, Xizhou Zhu, Chenyu Zhang, Zhe Chen, Jiawen Liu, Wenhai Wang, Lewei Lu, Gao Huang, Yu Qiao, Jifeng Dai
Allocation for Omnidirectional Aerial Robots: Incorporating Power Dynamics
Eugenio Cuniato, Mike Allenspach, Thomas Stastny, Helen Oleynikova, Roland Siegwart, Michael Pantic
Unleashing the Power of Continual Learning on Non-Centralized Devices: A Survey
Yichen Li, Haozhao Wang, Wenchao Xu, Tianzhe Xiao, Hong Liu, Minzhu Tu, Yuying Wang, Xin Yang, Rui Zhang, Shui Yu, Song Guo, Ruixuan Li
Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN
Pengxiang Li, Lu Yin, Shiwei Liu