LLM-Based Agents
LLM-based agents are software systems that use large language models (LLMs) to perform complex tasks autonomously, often by interacting with external tools and environments. Current research focuses on improving agent safety and reliability through techniques such as memory management and error correction, and on unified frameworks for agent design and evaluation, including benchmarks that assess performance across diverse tasks and environments. The field is significant because it pushes the boundaries of AI capabilities, enabling applications in areas such as social simulation, software engineering, and healthcare, while raising important questions about AI safety and security.
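Concretely, most agents of this kind share the same control flow: a loop in which the model reasons about the task, optionally invokes a tool, and feeds the observation back into its context. The sketch below is a generic, minimal version of that loop under stated assumptions, not the method of any paper listed here; `call_llm`, the `TOOL: name(arg)` calling convention, and both stub tools are hypothetical placeholders.

```python
# Minimal sketch of the agent loop described above: the LLM plans,
# optionally calls a tool, observes the result, and iterates.
# `call_llm`, the "TOOL: name(arg)" convention, and the registry
# entries are illustrative assumptions, not APIs from the papers below.
from typing import Callable

def call_llm(messages: list[dict]) -> str:
    """Placeholder: swap in any chat-completion client here."""
    raise NotImplementedError

# Toy tool registry mapping tool names to plain Python callables.
TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda query: f"(stub) results for {query!r}",
    "calc": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy only
}

def run_agent(task: str, max_steps: int = 5) -> str:
    """ReAct-style loop: each turn the LLM either calls a tool or answers."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if reply.startswith("TOOL:"):  # e.g. "TOOL: search(LLM agents)"
            name, _, arg = reply[len("TOOL:"):].strip().partition("(")
            observation = TOOLS[name.strip()](arg.rstrip(")"))
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user",
                             "content": f"Observation: {observation}"})
        else:  # no tool call: treat the reply as the final answer
            return reply
    return "Stopped: step budget exhausted."
```

Production systems typically replace the string-based tool convention with structured function calling and add persistent memory and error recovery, which is where much of the safety and reliability research summarized above concentrates.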
Papers
What if LLMs Have Different World Views: Simulating Alien Civilizations with LLM-based Agents
Mingyu Jin, Beichen Wang, Zhaoqian Xue, Suiyuan Zhu, Wenyue Hua, Hua Tang, Kai Mei, Mengnan Du, Yongfeng Zhang
Large Language Model-based Human-Agent Collaboration for Complex Task Solving
Xueyang Feng, Zhi-Yuan Chen, Yujia Qin, Yankai Lin, Xu Chen, Zhiyuan Liu, Ji-Rong Wen
Shall We Team Up: Exploring Spontaneous Cooperation of Competing LLM Agents
Zengqing Wu, Run Peng, Shuyuan Zheng, Qianying Liu, Xu Han, Brian Inhyuk Kwon, Makoto Onizuka, Shaojie Tang, Chuan Xiao
WorldCoder, a Model-Based LLM Agent: Building World Models by Writing Code and Interacting with the Environment
Hao Tang, Darren Key, Kevin Ellis
Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents
Zelong Li, Wenyue Hua, Hao Wang, He Zhu, Yongfeng Zhang
Computational Experiments Meet Large Language Model Based Agents: A Survey and Perspective
Qun Ma, Xiao Xue, Deyu Zhou, Xiangning Yu, Donghua Liu, Xuwen Zhang, Zihan Zhao, Yifan Shen, Peilin Ji, Juanjuan Li, Gang Wang, Wanpeng Ma