LLM-Based Systems
Large language model (LLM)-based systems are advancing rapidly, with the goal of improving efficiency and accuracy across diverse applications. Current research focuses on optimizing LLM performance through techniques such as multi-agent orchestration, adaptive reward-model selection (e.g., via multi-armed bandits), and the integration of LLMs with symbolic methods for stronger reasoning and planning. This work matters because it addresses known limitations of LLMs, including inconsistency, hallucination, and computational cost, yielding more robust and reliable AI systems in domains such as healthcare, robotics, and software engineering.
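To make the bandit idea concrete: adaptive reward-model selection can be framed as a multi-armed bandit in which each arm is a candidate reward model and the observed payoff is how well that model's score tracked downstream quality. The sketch below is a minimal, hypothetical epsilon-greedy illustration of this framing; the class and method names are invented here and do not come from any of the papers listed.

```python
import random


class BanditRewardModelSelector:
    """Epsilon-greedy bandit over candidate reward models.

    A hypothetical sketch: each "arm" is a reward model name, and
    `update` feeds back a scalar payoff for the model that was used.
    """

    def __init__(self, model_names, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {m: 0 for m in model_names}
        self.values = {m: 0.0 for m in model_names}  # running mean payoff

    def select(self):
        # Explore a random model with probability epsilon,
        # otherwise exploit the model with the best mean payoff so far.
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))
        return max(self.values, key=self.values.get)

    def update(self, model, payoff):
        # Incremental update of the running mean for the chosen model.
        self.counts[model] += 1
        n = self.counts[model]
        self.values[model] += (payoff - self.values[model]) / n
```

In use, a system would call `select()` to pick which reward model scores the next batch of LLM outputs, then call `update()` once a quality signal for that batch is available; with `epsilon=0` the selector becomes purely greedy.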
Papers
MALADE: Orchestration of LLM-powered Agents with Retrieval Augmented Generation for Pharmacovigilance
Jihye Choi, Nils Palumbo, Prasad Chalasani, Matthew M. Engelhard, Somesh Jha, Anivarya Kumar, David Page
TrustNavGPT: Modeling Uncertainty to Improve Trustworthiness of Audio-Guided LLM-Based Robot Navigation
Xingpeng Sun, Yiran Zhang, Xindi Tang, Amrit Singh Bedi, Aniket Bera
Integrating Large Language Models and Knowledge Graphs for Extraction and Validation of Textual Test Data
Antonio De Santis, Marco Balduini, Federico De Santis, Andrea Proia, Arsenio Leo, Marco Brambilla, Emanuele Della Valle
Dialog Flow Induction for Constrainable LLM-Based Chatbots
Stuti Agrawal, Nishi Uppuluri, Pranav Pillai, Revanth Gangi Reddy, Zoey Li, Gokhan Tur, Dilek Hakkani-Tur, Heng Ji
Evaluating the Impact of Advanced LLM Techniques on AI-Lecture Tutors for a Robotics Course
Sebastian Kahl, Felix Löffler, Martin Maciol, Fabian Ridder, Marius Schmitz, Jennifer Spanagel, Jens Wienkamp, Christopher Burgahn, Malte Schilling
Prompt Refinement or Fine-tuning? Best Practices for using LLMs in Computational Social Science Tasks
Anders Giovanni Møller, Luca Maria Aiello