Real Power
Real power in artificial intelligence research currently centers on understanding and leveraging the capabilities of large language models (LLMs), moving beyond traditional fine-tuning toward more efficient approaches such as in-context learning. Research focuses on improving LLM performance through techniques such as self-prompting, on exploring novel architectures like autoregressive decision trees, and on incorporating external knowledge sources to enhance reasoning and reduce hallucinations. These advances have significant implications for diverse fields, including natural language processing, computer vision, and scientific discovery, by enabling more efficient and effective solutions to complex problems.
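The contrast between fine-tuning and in-context learning mentioned above can be illustrated with a minimal sketch: rather than updating model weights, task demonstrations are placed directly in the prompt. The `format_prompt` helper and the sentiment-classification task below are hypothetical illustrations, not drawn from any paper in this list.

```python
# Minimal sketch of in-context (few-shot) learning: the task is
# specified entirely by demonstrations in the prompt, with no
# gradient updates to the model's weights.

def format_prompt(examples, query):
    """Build a few-shot prompt from (input, label) demonstration pairs."""
    blocks = []
    for text, label in examples:
        blocks.append(f"Review: {text}\nSentiment: {label}")
    # The final block leaves the label blank for the model to complete.
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

demos = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = format_prompt(demos, "A delightful surprise.")
print(prompt)
```

The resulting string would be sent to an LLM as-is; adapting the model to a new task then amounts to swapping the demonstrations, which is what makes the approach cheaper than per-task fine-tuning.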
Papers
Prompt Me Up: Unleashing the Power of Alignments for Multimodal Entity and Relation Extraction
Xuming Hu, Junzhe Chen, Aiwei Liu, Shiao Meng, Lijie Wen, Philip S. Yu
From Pointwise to Powerhouse: Initialising Neural Networks with Generative Models
Christian Harder, Moritz Fuchs, Yuri Tolkach, Anirban Mukhopadhyay
On the Powerfulness of Textual Outlier Exposure for Visual OoD Detection
Sangha Park, Jisoo Mok, Dahuin Jung, Saehyung Lee, Sungroh Yoon
Deepfake Detection: Leveraging the Power of 2D and 3D CNN Ensembles
Aagam Bakliwal, Amit D. Joshi
HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained Heterogeneous Graph Neural Networks
Yihong Ma, Ning Yan, Jiayu Li, Masood Mortazavi, Nitesh V. Chawla
Enhancing Robotic Manipulation: Harnessing the Power of Multi-Task Reinforcement Learning and Single Life Reinforcement Learning in Meta-World
Ghadi Nehme, Ishan Sabane, Tejas Y. Deo
Exploring the Power of Graph Neural Networks in Solving Linear Optimization Problems
Chendi Qian, Didier Chételat, Christopher Morris
Harnessing the Power of LLMs: Evaluating Human-AI Text Co-Creation through the Lens of News Headline Generation
Zijian Ding, Alison Smith-Renner, Wenjuan Zhang, Joel R. Tetreault, Alejandro Jaimes