LLM-Based
Large language model (LLM)-based systems are advancing rapidly, with the goal of improving efficiency and accuracy across diverse applications. Current research focuses on optimizing LLM performance through techniques such as multi-agent systems, adaptive reward-model selection (e.g., via multi-armed bandits), and the integration of LLMs with symbolic methods for stronger reasoning and planning. This work matters because it addresses known limitations of LLMs, including inconsistency, hallucination, and computational cost, yielding more robust and reliable AI systems for domains such as healthcare, robotics, and software engineering.
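To make the multi-armed-bandit idea concrete, the sketch below shows one common way adaptive reward-model selection can be framed: each candidate reward model is a bandit arm, and a UCB1 policy balances exploring under-tested models against exploiting the best one so far. This is a minimal illustration of the general technique, not the method of any paper listed here; the model names, the Bernoulli payoff simulation, and the class name UCB1RewardModelSelector are all hypothetical.

import math
import random

class UCB1RewardModelSelector:
    """UCB1 bandit over a set of candidate reward models.

    Each arm is a reward model; the observed payoff is a score in [0, 1]
    (e.g., agreement with a held-out preference label). All names here
    are illustrative assumptions, not an API from the cited papers.
    """

    def __init__(self, model_names):
        self.model_names = list(model_names)
        self.counts = [0] * len(self.model_names)    # pulls per arm
        self.values = [0.0] * len(self.model_names)  # running mean payoff

    def select(self):
        # Play each arm once before applying the UCB rule.
        for i, c in enumerate(self.counts):
            if c == 0:
                return i
        total = sum(self.counts)
        # UCB1 score: mean payoff plus an exploration bonus that shrinks
        # as an arm accumulates observations.
        ucb = [
            v + math.sqrt(2.0 * math.log(total) / c)
            for v, c in zip(self.values, self.counts)
        ]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, arm, payoff):
        # Incremental mean update for the chosen arm.
        self.counts[arm] += 1
        self.values[arm] += (payoff - self.values[arm]) / self.counts[arm]


if __name__ == "__main__":
    # Hypothetical setup: three reward models of unknown quality,
    # simulated as Bernoulli payoffs with hidden success rates.
    random.seed(0)
    hidden_quality = {"rm-small": 0.55, "rm-medium": 0.65, "rm-large": 0.80}
    selector = UCB1RewardModelSelector(hidden_quality)

    for _ in range(2000):
        arm = selector.select()
        name = selector.model_names[arm]
        payoff = 1.0 if random.random() < hidden_quality[name] else 0.0
        selector.update(arm, payoff)

    for name, c, v in zip(selector.model_names, selector.counts, selector.values):
        print(f"{name}: pulls={c}, mean payoff={v:.2f}")

Run as-is, the selector concentrates most pulls on the highest-quality model while still occasionally sampling the others; in a real pipeline the simulated payoff would be replaced by an actual evaluation signal for the chosen reward model.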
Papers
AgentOccam: A Simple Yet Strong Baseline for LLM-Based Web Agents
Ke Yang, Yao Liu, Sapana Chaudhary, Rasool Fakoor, Pratik Chaudhari, George Karypis, Huzefa Rangwala
Jailbreaking LLM-Controlled Robots
Alexander Robey, Zachary Ravichandran, Vijay Kumar, Hamed Hassani, George J. Pappas
LLMOPT: Learning to Define and Solve General Optimization Problems from Scratch
Caigao Jiang, Xiang Shu, Hong Qian, Xingyu Lu, Jun Zhou, Aimin Zhou, Yang Yu
SELP: Generating Safe and Efficient Task Plans for Robot Agents with Large Language Models
Yi Wu, Zikang Xiong, Yiran Hu, Shreyash S. Iyengar, Nan Jiang, Aniket Bera, Lin Tan, Suresh Jagannathan
Fast and Accurate Task Planning using Neuro-Symbolic Language Models and Multi-level Goal Decomposition
Minseo Kwon, Yaesol Kim, Young J. Kim