LLM-Based Agents
LLM-based agents are software programs that use large language models (LLMs) to perform complex tasks autonomously, often by interacting with external tools and environments. Current research emphasizes improving agent safety and reliability through techniques such as memory management and error correction, along with unified frameworks for agent design and evaluation, including benchmarks that assess performance across varied tasks and environments. The field is significant because it pushes the boundaries of AI capabilities, enabling applications in areas such as social simulation, software engineering, and healthcare, while also raising important questions about AI safety and security.
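To make the tool-and-environment loop concrete, the sketch below shows a minimal agent cycle in Python: the model is asked for an action, the matching tool runs, and the observation is fed back until the model decides to finish. Everything here is illustrative rather than any specific system's API: `call_llm` is a stub standing in for a real model call, and the `search` and `calculator` tools are hypothetical.

```python
import json
from typing import Callable, Dict

# Hypothetical tool registry: each tool is a plain function the agent may invoke.
TOOLS: Dict[str, Callable[[str], str]] = {
    "search": lambda query: f"(stub) results for {query!r}",
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # demo only
}

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call. For this sketch it always chooses to
    finish, so the loop terminates after one step."""
    return json.dumps({"action": "finish", "input": "42"})

def run_agent(task: str, max_steps: int = 5) -> str:
    """Minimal observe-think-act loop: ask the LLM for an action, execute the
    matching tool, append the observation to the transcript, and repeat."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        decision = json.loads(call_llm(transcript))
        action, arg = decision["action"], decision["input"]
        if action == "finish":
            return arg  # the agent's final answer
        observation = TOOLS[action](arg)
        transcript += f"Action: {action}({arg!r}) -> {observation}\n"
    return "max steps exceeded"

if __name__ == "__main__":
    print(run_agent("What is 6 * 7?"))
```

Much of the research summarized above (memory management, error correction, benchmarks) targets the weak points of exactly this loop: what the agent keeps in its transcript, how it recovers when a tool call fails, and how its end-to-end behavior is scored.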