LLM-Based Agents
LLM-based agents are software programs that leverage large language models (LLMs) to perform complex tasks autonomously, often by interacting with external tools and environments. Current research emphasizes improving agent safety and reliability through techniques such as memory management, error correction, and unified frameworks for agent design and evaluation, including benchmarks that assess performance across diverse tasks and environments. The field is significant because it pushes the boundaries of AI capabilities, enabling applications in areas such as social simulation, software engineering, and healthcare, while also raising important questions about AI safety and security.
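To make the core architecture concrete, below is a minimal sketch of the typical agent loop: the model is queried, may request a tool call, the tool's output is fed back as an observation, and the loop repeats until the model produces a final answer or a step cap is reached (a simple reliability safeguard of the kind the research above studies). All names here (call_llm, TOOLS, run_agent) are hypothetical; call_llm stubs out whatever chat-completion API a real system would use.

    from typing import Callable

    # Tool registry: functions the agent may invoke by name.
    # The calculator uses eval with builtins disabled; a toy example only.
    TOOLS: dict[str, Callable[[str], str]] = {
        "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
        "echo": lambda text: text,
    }

    def call_llm(messages: list[dict]) -> dict:
        """Hypothetical stand-in for an LLM API call.
        A real implementation would send `messages` to a hosted model;
        here we return a canned tool request, then a final answer."""
        if not any(m["role"] == "tool" for m in messages):
            return {"type": "tool_call", "tool": "calculator", "input": "2 + 2"}
        return {"type": "final", "content": "The result is 4."}

    def run_agent(task: str, max_steps: int = 5) -> str:
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):  # step cap: a basic safety/reliability guard
            reply = call_llm(messages)
            if reply["type"] == "final":
                return reply["content"]
            # Execute the requested tool and feed the observation back.
            result = TOOLS[reply["tool"]](reply["input"])
            messages.append({"role": "tool", "content": result})
        return "Step limit reached without a final answer."

    if __name__ == "__main__":
        print(run_agent("What is 2 + 2?"))

Real systems layer the techniques mentioned above onto this loop, for example persisting `messages` across sessions (memory management) or validating and retrying malformed tool calls (error correction), but the query-act-observe cycle itself is the common core.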