LLM-Based Agents
LLM-based agents are software programs that leverage large language models (LLMs) to perform complex tasks autonomously, often by interacting with external tools and environments. Current research emphasizes improving agent safety and reliability through techniques such as memory management, error correction, and unified frameworks for agent design and evaluation, including benchmarks that assess performance across diverse tasks and environments. The field is significant because it pushes the boundaries of AI capabilities, enabling applications in areas such as social simulation, software engineering, and healthcare, while also raising important questions about AI safety and security.
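The core pattern behind such agents is a loop in which the model proposes an action, the agent executes it against an external tool, and the resulting observation is fed back until the model emits a final answer. The sketch below is a minimal, illustrative version of that loop; all names (`TOOLS`, `run_agent`, the `Action:`/`Observation:` protocol) are assumptions for this example, and the scripted `stub_llm` stands in for a real LLM API call.

```python
# Minimal sketch of an LLM-based agent's tool-use loop (ReAct-style).
# The stub below replaces a real LLM call so the example is self-contained.

# Registry of external tools the agent may invoke (illustrative).
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def stub_llm(history):
    """Stand-in for an LLM: first requests a tool call, then answers."""
    observations = [m for m in history if m.startswith("Observation:")]
    if not observations:
        return "Action: calculator[2 + 3]"
    return "Final Answer: " + observations[-1].split(": ", 1)[1]

def run_agent(task, llm=stub_llm, max_steps=5):
    """Loop: model proposes an action, agent executes it, observation
    is appended to the history, until a final answer is produced."""
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        reply = llm(history)
        if reply.startswith("Final Answer:"):
            return reply.split(": ", 1)[1]
        if reply.startswith("Action:"):
            # Parse "Action: tool_name[argument]".
            name, arg = reply[len("Action: "):].rstrip("]").split("[", 1)
            observation = TOOLS[name](arg)
            history += [reply, f"Observation: {observation}"]
    return None  # step budget exhausted

print(run_agent("What is 2 + 3?"))  # prints "5"
```

A production system would replace `stub_llm` with a call to a hosted model, sandbox tool execution, and add the memory-management and error-correction layers mentioned above; the fixed `max_steps` budget is one simple reliability guard.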
Papers
Steve-Eye: Equipping LLM-based Embodied Agents with Visual Perception in Open Worlds
Sipeng Zheng, Jiazheng Liu, Yicheng Feng, Zongqing Lu
ToolChain*: Efficient Action Space Navigation in Large Language Models with A* Search
Yuchen Zhuang, Xiang Chen, Tong Yu, Saayan Mitra, Victor Bursztyn, Ryan A. Rossi, Somdeb Sarkhel, Chao Zhang