Real Power
Real power in artificial intelligence research currently centers on understanding and leveraging the capabilities of large language models (LLMs) across a wide range of tasks, moving beyond traditional fine-tuning toward more efficient approaches such as in-context learning. Research focuses on improving LLM performance through techniques such as self-prompting, exploring novel architectures like autoregressive decision trees, and incorporating external knowledge sources to enhance reasoning and reduce hallucinations. These advances have significant implications for diverse fields, including natural language processing, computer vision, and scientific discovery, by enabling more efficient and effective solutions to complex problems.
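To make the contrast with fine-tuning concrete, in-context learning places labeled demonstrations directly in the prompt rather than updating model weights. The sketch below is a minimal illustration of few-shot prompt construction; the sentiment task and example texts are assumptions for demonstration, not drawn from any paper in this listing.

```python
# Minimal sketch of in-context (few-shot) learning: task examples are
# embedded in the prompt itself, so no gradient updates are needed.
# The sentiment task below is an illustrative assumption.

def build_few_shot_prompt(examples, query):
    """Concatenate labeled demonstrations, then append the new query."""
    blocks = []
    for text, label in examples:
        blocks.append(f"Review: {text}\nSentiment: {label}")
    # The final block leaves the label blank for the model to complete.
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

demos = [
    ("A delightful, moving film.", "positive"),
    ("Two hours I will never get back.", "negative"),
]
prompt = build_few_shot_prompt(demos, "Surprisingly good.")
print(prompt)
```

The resulting string would be sent as a single prompt to an LLM, which is expected to complete the trailing `Sentiment:` field by pattern-matching the demonstrations.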
291 papers
Papers
March 18, 2025
The Power of Context: How Multimodality Improves Image Super-Resolution
Kangfu Mei, Hossein Talebi, Mojtaba Ardakani, Vishal M. Patel, Peyman Milanfar, Mauricio Delbracio
Google●Johns Hopkins University

Wasserstein-based Kernels for Clustering: Application to Power Distribution Graphs
Alfredo Oneto, Blazhe Gjorgiev, Giovanni Sansavini
ETH Zurich
March 13, 2025
The Power of One: A Single Example is All it Takes for Segmentation in VLMs
Mir Rayat Imtiaz Hossain, Mennatullah Siam, Leonid Sigal, James J. Little
University of British Columbia●Vector Institute for AI●Canada CIFAR AI Chair

Unlock the Power of Unlabeled Data in Language Driving Model
Chaoqun Wang, Jie Yang, Xiaobin Hong, Ruimao Zhang
Shenzhen●Nanjing University
March 9, 2025
Generalizable Machine Learning Models for Predicting Data Center Server Power, Efficiency, and Throughput
Nuoa Lei, Arman Shehabi, Jun Lu, Zhi Cao, Jonathan Koomey, Sarah Smith, Eric Masanet
Lawrence Berkeley National Laboratory●University of Illinois●Nankai University●Koomey Analytics●University of California Santa Barbara

General Scales Unlock AI Evaluation with Explanatory and Predictive Power
Lexin Zhou, Lorenzo Pacchiardi, Fernando Martínez-Plumed, Katherine M. Collins, Yael Moros-Daval, Seraphina Zhang, Qinlin Zhao, Yitian Huang +18
University of Cambridge●Microsoft Research Asia●Universitat Politècnica de València●KU...+6
March 7, 2025
Impoola: The Power of Average Pooling for Image-Based Deep Reinforcement Learning
Raphael Trumpp, Ansgar Schäfftlein, Mirco Theile, Marco Caccamo
TUM School of Engineering and Design

Pi-GPS: Enhancing Geometry Problem Solving by Unleashing the Power of Diagrammatic Information
Junbo Zhao, Ting Zhang, Jiayu Sun, Mi Tian, Hua Huang
Beijing Normal University●TAL
March 5, 2025
Small but Mighty: Enhancing Time Series Forecasting with Lightweight LLMs
Haoran Fan, Bin Li, Yixuan Weng, Shoujun Zhou
Chongqing University of Posts and Telecommunications●Chinese Academy of Sciences●Westlake University

PowerAttention: Exponentially Scaling of Receptive Fields for Effective Sparse Attention
Lida Chen, Dong Xu, Chenxin An, Xintao Wang, Yikai Zhang, Jiangjie Chen, Zujie Liang, Feng Wei, Jiaqing Liang, Yanghua Xiao, Wei Wang
Fudan University●The University of Hong Kong●ByteDance Seed●Ant Group
February 24, 2025
The Power of Graph Signal Processing for Chip Placement Acceleration
Yiting Liu, Hai Zhou, Jia Wang, Fan Yang, Xuan Zeng, Li Shang
Fudan University●Northwestern University●Illinois Institute of Technology

How Do Large Language Monkeys Get Their Power (Laws)?
Rylan Schaeffer, Joshua Kazdan, John Hughes, Jordan Juravsky, Sara Price, Aengus Lynch, Erik Jones, Robert Kirk, Azalia Mirhoseini, Sanmi Koyejo