LLM-Based
Large language model (LLM)-based systems are advancing rapidly, with the goal of improving efficiency and accuracy across diverse applications. Current research focuses on optimizing LLM performance through techniques such as multi-agent systems, adaptive reward model selection (e.g., via multi-armed bandits), and the integration of LLMs with symbolic methods for stronger reasoning and planning. This work matters because it addresses limitations of existing LLMs, including inconsistency, hallucination, and computational cost, yielding more robust and reliable AI systems for domains such as healthcare, robotics, and software engineering.
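To make the multi-armed-bandit idea concrete, here is a minimal sketch (not drawn from any of the papers below) of a UCB1 selector that adaptively chooses which reward model to use for scoring LLM outputs. The arm names and the simulated reward probabilities are illustrative assumptions, standing in for how often each reward model's feedback actually improves an output.

```python
import math
import random

class UCB1Selector:
    """UCB1 bandit over a set of candidate reward models (the 'arms')."""

    def __init__(self, arms):
        self.arms = arms                  # e.g., names of candidate reward models
        self.counts = [0] * len(arms)     # times each arm has been pulled
        self.values = [0.0] * len(arms)   # running mean reward per arm

    def select(self):
        # Pull each arm once before applying the UCB rule.
        for i, c in enumerate(self.counts):
            if c == 0:
                return i
        total = sum(self.counts)
        # UCB1 score: mean reward plus an exploration bonus that shrinks
        # as an arm is sampled more often.
        ucb = [
            self.values[i] + math.sqrt(2 * math.log(total) / self.counts[i])
            for i in range(len(self.arms))
        ]
        return max(range(len(self.arms)), key=lambda i: ucb[i])

    def update(self, i, reward):
        # Incremental update of the running mean for arm i.
        self.counts[i] += 1
        self.values[i] += (reward - self.values[i]) / self.counts[i]

# Toy usage: each "reward model" is simulated as a Bernoulli arm whose
# success probability is a hypothetical stand-in for its usefulness.
random.seed(0)
true_quality = {"rm_small": 0.4, "rm_medium": 0.6, "rm_large": 0.7}
selector = UCB1Selector(list(true_quality))
for _ in range(1000):
    i = selector.select()
    reward = 1.0 if random.random() < true_quality[selector.arms[i]] else 0.0
    selector.update(i, reward)
print(dict(zip(selector.arms, selector.counts)))  # most pulls concentrate on rm_large
```

Over time the selector concentrates its pulls on the reward model with the highest empirical payoff while still occasionally exploring the others, which is the basic trade-off behind adaptive reward model selection.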
Papers
DELTA: Decomposed Efficient Long-Term Robot Task Planning using Large Language Models
Yuchen Liu, Luigi Palmieri, Sebastian Koch, Ilche Georgievski, Marco Aiello
Bias Amplification in Language Model Evolution: An Iterated Learning Perspective
Yi Ren, Shangmin Guo, Linlu Qiu, Bailin Wang, Danica J. Sutherland
Detecting Hallucination and Coverage Errors in Retrieval Augmented Generation for Controversial Topics
Tyler A. Chang, Katrin Tomanek, Jessica Hoffmann, Nithum Thain, Erin van Liemt, Kathleen Meier-Hellstern, Lucas Dixon
Search-based Optimisation of LLM Learning Shots for Story Point Estimation
Vali Tawosi, Salwa Alamir, Xiaomo Liu
When is Tree Search Useful for LLM Planning? It Depends on the Discriminator
Ziru Chen, Michael White, Raymond Mooney, Ali Payani, Yu Su, Huan Sun
Decomposition for Enhancing Attention: Improving LLM-based Text-to-SQL through Workflow Paradigm
Yuanzhen Xie, Xinzhou Jin, Tao Xie, MingXiong Lin, Liang Chen, Chenyun Yu, Lei Cheng, ChengXiang Zhuo, Bo Hu, Zang Li