Real Power
Real power in artificial intelligence research currently centers on understanding and leveraging the capabilities of large language models (LLMs) for a wide range of tasks, moving beyond traditional fine-tuning toward more efficient approaches such as in-context learning. Research focuses on improving LLM performance through techniques such as self-prompting, through novel architectures such as autoregressive decision trees, and through the incorporation of external knowledge sources that enhance reasoning and reduce hallucinations. These advances have significant implications for diverse fields, including natural language processing, computer vision, and scientific discovery, by enabling more efficient and effective solutions to complex problems.
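To make the in-context learning mentioned above concrete: instead of updating a model's weights, the model is steered by a handful of demonstrations placed directly in the prompt. The sketch below builds such a few-shot prompt in plain Python; it is a minimal illustration, and the `complete` callable standing in for an LLM completion call is a hypothetical placeholder, not any particular library's API.

```python
# Minimal sketch of few-shot in-context learning: behavior is steered by
# demonstrations in the prompt, with no fine-tuning or weight updates.
# `complete` is a hypothetical stand-in for any LLM completion function.

from typing import Callable, List, Tuple


def build_few_shot_prompt(demos: List[Tuple[str, str]], query: str) -> str:
    """Concatenate (input, output) demonstrations ahead of the new query."""
    parts = [f"Input: {x}\nOutput: {y}" for x, y in demos]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)


def in_context_classify(complete: Callable[[str], str],
                        demos: List[Tuple[str, str]],
                        query: str) -> str:
    """Label `query` purely via the prompt; the model itself is unchanged."""
    prompt = build_few_shot_prompt(demos, query)
    return complete(prompt).strip()


if __name__ == "__main__":
    demos = [
        ("The movie was a delight.", "positive"),
        ("I want my money back.", "negative"),
    ]
    # Print the assembled prompt; a real system would pass it to an LLM.
    print(build_few_shot_prompt(demos, "A slow but rewarding film."))
```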
Papers
The Power of Resets in Online Reinforcement Learning
Zakaria Mhammedi, Dylan J. Foster, Alexander Rakhlin
XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts
Yifeng Ding, Jiawei Liu, Yuxiang Wei, Terry Yue Zhuo, Lingming Zhang
Unsupervised End-to-End Task-Oriented Dialogue with LLMs: The Power of the Noisy Channel
Brendan King, Jeffrey Flanigan
Exploring and Unleashing the Power of Large Language Models in Automated Code Translation
Zhen Yang, Fang Liu, Zhongxing Yu, Jacky Wai Keung, Jia Li, Shuo Liu, Yifan Hong, Xiaoxue Ma, Zhi Jin, Ge Li
On the Power of Interactive Proofs for Learning
Tom Gur, Mohammad Mahdi Jahanara, Mohammad Mahdi Khodabandeh, Ninad Rajgopal, Bahar Salamatian, Igor Shinkar
The Power of Properties: Uncovering the Influential Factors in Emotion Classification
Tim Büchner, Niklas Penzel, Orlando Guntinas-Lichius, Joachim Denzler