Task-Oriented Dialogue Systems
Task-oriented dialogue systems (TODS) aim to build conversational agents that can effectively complete specific user tasks. Current research focuses on improving robustness and efficiency, particularly by integrating large language models (LLMs) into subtasks such as intent detection, dialogue state tracking, and response generation, often with techniques like contrastive learning and chain-of-thought prompting. These advances are crucial for more natural and effective human-computer interaction in applications ranging from virtual assistants to customer-service chatbots, and they are driving progress in areas such as data augmentation and efficient model training. Research also emphasizes better evaluation methodologies, including the incorporation of user feedback and the mitigation of biases in model outputs.
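To make the dialogue state tracking (DST) subtask concrete, here is a minimal sketch of how an LLM-driven tracker might work: the state is a set of slot-value pairs, and at each turn the LLM is prompted to extract values from the dialogue history. The function `call_llm` is a hypothetical stand-in for a real model API, and the slot names are illustrative; none of this comes from the papers listed below.

```python
import json

def build_dst_prompt(history, slots):
    """Format the dialogue history and candidate slots into a DST prompt."""
    turns = "\n".join(f"{speaker}: {utt}" for speaker, utt in history)
    return (
        "Extract the dialogue state as JSON with these slots: "
        + ", ".join(slots)
        + ".\nUse null for slots not mentioned yet.\n\n"
        + turns
        + "\n\nState:"
    )

def call_llm(prompt):
    # Hypothetical LLM call; returns a canned response for illustration only.
    return json.dumps({"cuisine": "italian", "area": "centre", "price": None})

def track_state(history, slots, prior_state=None):
    """One DST step: prompt the LLM and merge its output into the prior state."""
    state = dict(prior_state or {})
    predicted = json.loads(call_llm(build_dst_prompt(history, slots)))
    # Keep only non-null updates so earlier turns are not overwritten with null.
    state.update({k: v for k, v in predicted.items() if v is not None})
    return state

history = [
    ("user", "I'd like an Italian restaurant."),
    ("system", "Any preferred area?"),
    ("user", "The centre, please."),
]
state = track_state(history, ["cuisine", "area", "price"])
print(state)  # {'cuisine': 'italian', 'area': 'centre'}
```

In a real system, `call_llm` would call an actual model and the returned JSON would need validation, but the accumulate-and-merge pattern over slot-value pairs is the core of turn-level state tracking.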
Papers
Towards LLM-driven Dialogue State Tracking
Yujie Feng, Zexin Lu, Bo Liu, Liming Zhan, Xiao-Ming Wu
Dual-Feedback Knowledge Retrieval for Task-Oriented Dialogue Systems
Tianyuan Shi, Liangzhi Li, Zijian Lin, Tao Yang, Xiaojun Quan, Qifan Wang
Turn-Level Active Learning for Dialogue State Tracking
Zihan Zhang, Meng Fang, Fanghua Ye, Ling Chen, Mohammad-Reza Namazi-Rad