LLM-Based Agents
LLM-based agents are software programs that leverage large language models (LLMs) to perform complex tasks autonomously, often by interacting with external tools and environments. Current research focuses on improving agent safety and reliability through techniques such as memory management and error correction, as well as on unified frameworks for agent design and evaluation, including benchmarks that assess performance across diverse tasks and environments. The field is significant because it pushes the boundaries of AI capabilities, enabling applications in areas such as social simulation, software engineering, and healthcare, while also raising important questions about AI safety and security.
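To make the tool-interaction pattern described above concrete, here is a minimal sketch of an agent loop in Python. It is not taken from any of the papers below; the `llm_complete` function is a hypothetical stand-in for a chat-completion API (stubbed here so the example runs as-is), and the tool names and JSON action format are illustrative assumptions.

```python
import json

# Hypothetical stand-in for a chat-completion API call. A real agent
# would query an LLM here; this stub requests one tool call and then
# returns a final answer, so the loop below is runnable as written.
def llm_complete(messages: list[dict]) -> str:
    if not any(m["role"] == "tool" for m in messages):
        return json.dumps({"tool": "calculator", "input": "2 + 2"})
    return json.dumps({"final_answer": "2 + 2 = 4"})

# Tools the agent may invoke; the name and signature are illustrative.
TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def run_agent(task: str, max_steps: int = 5) -> str:
    """Basic observe-act loop: at each step the model either selects a
    tool to call or emits a final answer."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = json.loads(llm_complete(messages))
        if "final_answer" in action:
            return action["final_answer"]
        # Execute the requested tool and feed the result back as context.
        result = TOOLS[action["tool"]](action["input"])
        messages.append({"role": "tool", "content": result})
    return "step budget exhausted"

print(run_agent("What is 2 + 2?"))
```

Most of the work surveyed below can be read as refinements of some part of this loop: what goes into the message history (memory management), how bad actions are detected and retried (error correction), and how whole trajectories are scored (benchmarks and evaluation frameworks).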
Papers
AutoGuide: Automated Generation and Selection of State-Aware Guidelines for Large Language Model Agents
Yao Fu, Dong-Ki Kim, Jaekyeom Kim, Sungryull Sohn, Lajanugen Logeswaran, Kyunghoon Bae, Honglak Lee
CleanAgent: Automating Data Standardization with LLM-based Agents
Danrui Qi, Jiannan Wang
TINA: Think, Interaction, and Action Framework for Zero-Shot Vision Language Navigation
Dingbang Li, Wenzhou Chen, Xin Lin
Agent-Pro: Learning to Evolve via Policy-Level Reflection and Optimization
Wenqi Zhang, Ke Tang, Hai Wu, Mengna Wang, Yongliang Shen, Guiyang Hou, Zeqi Tan, Peng Li, Yueting Zhuang, Weiming Lu
BASES: Large-scale Web Search User Simulation with Large Language Model based Agents
Ruiyang Ren, Peng Qiu, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Hua Wu, Ji-Rong Wen, Haifeng Wang