LLM-Based
Large language model (LLM)-based systems are advancing rapidly, with the aim of improving efficiency and accuracy across diverse applications. Current research focuses on optimizing LLM performance through techniques such as multi-agent systems, adaptive reward-model selection (e.g., via multi-armed bandits), and the integration of LLMs with symbolic methods for stronger reasoning and planning. This work matters because it addresses known limitations of LLMs, such as inconsistency, hallucination, and high computational cost, yielding more robust and reliable AI systems for domains including healthcare, robotics, and software engineering.
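As a rough illustration of the bandit-based reward-model selection mentioned above, the sketch below treats each candidate reward model as an arm of an epsilon-greedy multi-armed bandit. This is a minimal, generic sketch, not any specific paper's method; the two "reward models" and their success rates are hypothetical stand-ins.

```python
import random

class EpsilonGreedyBandit:
    """Epsilon-greedy multi-armed bandit: each arm stands for a candidate
    reward model; we track a running mean reward per arm."""

    def __init__(self, n_arms, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # pulls per arm
        self.values = [0.0] * n_arms  # running mean reward per arm
        self.rng = random.Random(seed)

    def select(self):
        # With probability epsilon, explore a random arm;
        # otherwise exploit the arm with the best mean so far.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        # Incremental mean update avoids storing the full reward history.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Toy usage: hypothetical reward models with true success rates 0.3 and 0.7.
bandit = EpsilonGreedyBandit(n_arms=2, epsilon=0.2)
true_means = [0.3, 0.7]
for _ in range(2000):
    arm = bandit.select()
    reward = 1.0 if bandit.rng.random() < true_means[arm] else 0.0
    bandit.update(arm, reward)
```

After enough pulls, the bandit concentrates selections on the stronger reward model while still exploring occasionally, which is the core trade-off such adaptive selection schemes manage.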
Papers
Richelieu: Self-Evolving LLM-Based Agents for AI Diplomacy
Zhenyu Guan, Xiangyu Kong, Fangwei Zhong, Yizhou Wang
FinCon: A Synthesized LLM Multi-Agent System with Conceptual Verbal Reinforcement for Enhanced Financial Decision Making
Yangyang Yu, Zhiyuan Yao, Haohang Li, Zhiyang Deng, Yupeng Cao, Zhi Chen, Jordan W. Suchow, Rong Liu, Zhenyu Cui, Denghui Zhang, Koduvayur Subbalakshmi, Guojun Xiong, Yueru He, Jimin Huang, Dong Li, Qianqian Xie
Asynchronous Large Language Model Enhanced Planner for Autonomous Driving
Yuan Chen, Zi-han Ding, Ziqin Wang, Yan Wang, Lijun Zhang, Si Liu
Enhancing the LLM-Based Robot Manipulation Through Human-Robot Collaboration
Haokun Liu, Yaonan Zhu, Kenji Kato, Atsushi Tsukahara, Izumi Kondo, Tadayoshi Aoyama, Yasuhisa Hasegawa