Real Power
Research on real power in artificial intelligence currently centers on understanding and leveraging the capabilities of large language models (LLMs) across a range of tasks, moving beyond traditional fine-tuning toward more efficient approaches such as in-context learning. Current work improves LLM performance through techniques such as self-prompting, explores novel architectures such as autoregressive decision trees, and incorporates external knowledge sources to strengthen reasoning and reduce hallucinations. These advances have significant implications for natural language processing, computer vision, and scientific discovery, enabling more efficient and effective solutions to complex problems.
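The contrast between fine-tuning and in-context learning mentioned above can be illustrated with a minimal sketch: rather than updating model weights, task demonstrations are placed directly in the prompt so the model infers the task from context alone. The `build_few_shot_prompt` helper and the toy sentiment examples below are hypothetical, for illustration only.

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot in-context learning prompt.

    Instead of fine-tuning model weights on labeled data,
    input/output demonstrations are concatenated into the prompt
    so the LLM can infer the task purely from context.
    """
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)


# Toy sentiment-classification demonstrations (hypothetical data).
demos = [
    ("The movie was fantastic", "positive"),
    ("I wasted two hours", "negative"),
]
prompt = build_few_shot_prompt(demos, "A delightful surprise")
print(prompt)
```

The same prompt template generalizes to other tasks simply by swapping the demonstrations, which is what makes the approach more efficient than per-task fine-tuning.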
Papers
Harnessing the Power of Neural Operators with Automatically Encoded Conservation Laws
Ning Liu, Yiming Fan, Xianyi Zeng, Milan Klöwer, Lu Zhang, Yue Yu
Unleashing the Power of CNN and Transformer for Balanced RGB-Event Video Recognition
Xiao Wang, Yao Rong, Shiao Wang, Yuan Chen, Zhe Wu, Bo Jiang, Yonghong Tian, Jin Tang
Learning in Online Principal-Agent Interactions: The Power of Menus
Minbiao Han, Michael Albert, Haifeng Xu
Exact Algorithms and Lowerbounds for Multiagent Pathfinding: Power of Treelike Topology
Foivos Fioravantes, Dušan Knop, Jan Matyáš Křišťan, Nikolaos Melissinos, Michal Opler
Riveter: Measuring Power and Social Dynamics Between Entities
Maria Antoniak, Anjalie Field, Jimin Mun, Melanie Walsh, Lauren F. Klein, Maarten Sap
DocPedia: Unleashing the Power of Large Multimodal Model in the Frequency Domain for Versatile Document Understanding
Hao Feng, Qi Liu, Hao Liu, Wengang Zhou, Houqiang Li, Can Huang
Unveiling the Power of Self-Attention for Shipping Cost Prediction: The Rate Card Transformer
P Aditya Sreekar, Sahil Verma, Varun Madhavan, Abhishek Persad