Real Power
Real power in artificial intelligence research currently centers on understanding and leveraging the capabilities of large language models (LLMs) across a wide range of tasks, moving beyond traditional fine-tuning toward more efficient approaches such as in-context learning. Current work focuses on improving LLM performance through techniques such as self-prompting, novel architectures like autoregressive decision trees, and the incorporation of external knowledge sources to strengthen reasoning and reduce hallucinations. These advances have significant implications for fields including natural language processing, computer vision, and scientific discovery, enabling more efficient and effective solutions to complex problems.
Papers
MegActor: Harness the Power of Raw Video for Vivid Portrait Animation
Shurong Yang, Huadong Li, Juhao Wu, Minhao Jing, Linze Li, Renhe Ji, Jiajun Liang, Haoqiang Fan
Power of Cooperative Supervision: Multiple Teachers Framework for Enhanced 3D Semi-Supervised Object Detection
Jin-Hee Lee, Jae-Keun Lee, Je-Seok Kim, Soon Kwon
Divide-and-Conquer Meets Consensus: Unleashing the Power of Functions in Code Generation
Jingchang Chen, Hongxuan Tang, Zheng Chu, Qianglong Chen, Zekun Wang, Ming Liu, Bing Qin
Unlocking the Power of Spatial and Temporal Information in Medical Multimodal Pre-training
Jinxia Yang, Bing Su, Wayne Xin Zhao, Ji-Rong Wen
Unleashing the Power of Unlabeled Data: A Self-supervised Learning Framework for Cyber Attack Detection in Smart Grids
Hanyu Zeng, Pengfei Zhou, Xin Lou, Zhen Wei Ng, David K. Y. Yau, Marianne Winslett
Qualitative and quantitative analysis of student's perceptions in the use of generative AI in educational environments
Sergio Altares-López, José M. Bengochea-Guevara, Carlos Ranz, Héctor Montes, Angela Ribeiro
Harnessing the power of longitudinal medical imaging for eye disease prognosis using Transformer-based sequence modeling
Gregory Holste, Mingquan Lin, Ruiwen Zhou, Fei Wang, Lei Liu, Qi Yan, Sarah H. Van Tassel, Kyle Kovacs, Emily Y. Chew, Zhiyong Lu, Zhangyang Wang, Yifan Peng
Power of $\ell_1$-Norm Regularized Kaczmarz Algorithms for High-Order Tensor Recovery
Katherine Henneberger, Jing Qin