LLM-Based
Large language model (LLM)-based systems are rapidly advancing, aiming to improve efficiency and accuracy across diverse applications. Current research focuses on improving LLM performance through techniques such as multi-agent coordination, adaptive reward model selection (e.g., via multi-armed bandits), and the integration of LLMs with symbolic methods for stronger reasoning and planning. This work matters because it addresses known limitations of current LLMs, including inconsistency, hallucination, and computational cost, leading to more robust and reliable AI systems across domains such as healthcare, robotics, and software engineering.
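The adaptive reward model selection mentioned above can be framed as a multi-armed bandit problem: each candidate reward model is an "arm", and the selector learns online which one yields the best feedback. The following is a minimal illustrative sketch using the standard UCB1 rule; the three reward models and their quality scores are hypothetical stand-ins, not taken from any of the papers listed below.

```python
import math
import random

def ucb1_select(counts, rewards, t):
    """Pick the arm (here: a candidate reward model) with the
    highest UCB1 score: empirical mean + exploration bonus."""
    for arm in range(len(counts)):
        if counts[arm] == 0:
            return arm  # try every arm at least once
    return max(
        range(len(counts)),
        key=lambda a: rewards[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]),
    )

# Hypothetical setup: three reward models whose true mean quality
# (probability of giving a useful training signal) is unknown to
# the selector.
true_means = [0.3, 0.5, 0.8]
counts = [0, 0, 0]     # pulls per arm
rewards = [0.0, 0.0, 0.0]  # cumulative reward per arm
random.seed(0)

for t in range(1, 1001):
    arm = ucb1_select(counts, rewards, t)
    # Simulated binary feedback from using that reward model.
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    rewards[arm] += reward

# After enough rounds, the best reward model dominates the pulls.
best = max(range(len(counts)), key=lambda a: counts[a])
```

Over 1000 rounds the exploration bonus shrinks and the selector concentrates its pulls on the highest-quality arm, which is the behavior a bandit-based reward model selector relies on.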
Papers
DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution
Yang Yue, Yulin Wang, Bingyi Kang, Yizeng Han, Shenzhi Wang, Shiji Song, Jiashi Feng, Gao Huang
WebRL: Training LLM Web Agents via Self-Evolving Online Curriculum Reinforcement Learning
Zehan Qi, Xiao Liu, Iat Long Iong, Hanyu Lai, Xueqiao Sun, Xinyue Yang, Jiadai Sun, Yu Yang, Shuntian Yao, Tianjie Zhang, Wei Xu, Jie Tang, Yuxiao Dong
Towards Pedagogical LLMs with Supervised Fine Tuning for Computing Education
Alexandra Vassar, Jake Renzella, Emily Ross, Andrew Taylor
DynaSaur: Large Language Agents Beyond Predefined Actions
Dang Nguyen, Viet Dac Lai, Seunghyun Yoon, Ryan A. Rossi, Handong Zhao, Ruiyi Zhang, Puneet Mathur, Nedim Lipka, Yu Wang, Trung Bui, Franck Dernoncourt, Tianyi Zhou
Navigating the Unknown: A Chat-Based Collaborative Interface for Personalized Exploratory Tasks
Yingzhe Peng, Xiaoting Qin, Zhiyang Zhang, Jue Zhang, Qingwei Lin, Xu Yang, Dongmei Zhang, Saravan Rajmohan, Qi Zhang
EmbodiedRAG: Dynamic 3D Scene Graph Retrieval for Efficient and Scalable Robot Task Planning
Meghan Booker, Grayson Byrd, Bethany Kemp, Aurora Schmidt, Corban Rivera
LLMs are Highly-Constrained Biophysical Sequence Optimizers
Angelica Chen, Samuel D. Stanton, Robert G. Alberstein, Andrew M. Watkins, Richard Bonneau, Vladimir Gligorijević, Kyunghyun Cho, Nathan C. Frey
Synergizing LLM Agents and Knowledge Graph for Socioeconomic Prediction in LBSN
Zhilun Zhou, Jingyang Fan, Yu Liu, Fengli Xu, Depeng Jin, Yong Li
LLM-based Optimization of Compound AI Systems: A Survey
Matthieu Lin, Jenny Sheng, Andrew Zhao, Shenzhi Wang, Yang Yue, Yiran Wu, Huan Liu, Jun Liu, Gao Huang, Yong-Jin Liu
Enhancing Trust and Safety in Digital Payments: An LLM-Powered Approach
Devendra Dahiphale, Naveen Madiraju, Justin Lin, Rutvik Karve, Monu Agrawal, Anant Modwal, Ramanan Balakrishnan, Shanay Shah, Govind Kaushal, Priya Mandawat, Prakash Hariramani, Arif Merchant (Google, Inc.)
NetSafe: Exploring the Topological Safety of Multi-agent Networks
Miao Yu, Shilong Wang, Guibin Zhang, Junyuan Mao, Chenlong Yin, Qijiong Liu, Qingsong Wen, Kun Wang, Yang Wang
Improving Parallel Program Performance Through DSL-Driven Code Generation with LLM Optimizers
Anjiang Wei, Allen Nie, Thiago S. F. X. Teixeira, Rohan Yadav, Wonchan Lee, Ke Wang, Alex Aiken
Bayesian Concept Bottleneck Models with LLM Priors
Jean Feng, Avni Kothari, Luke Zier, Chandan Singh, Yan Shuo Tan