Task-Oriented Dialogue

Task-oriented dialogue (TOD) research focuses on building conversational agents that can effectively complete specific user tasks, such as making reservations or providing information. Current research emphasizes improving the accuracy and robustness of dialogue state tracking (DST), particularly with large language models (LLMs) and techniques like function calling and in-context learning. It also addresses challenges such as handling clarification questions, managing multi-user interactions, and mitigating biases and unsafe responses. These advances improve the efficiency and user experience of virtual assistants and other conversational AI applications.
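At its core, DST maintains a belief state of slot-value pairs that is updated after each user turn. The sketch below illustrates this loop with a toy keyword extractor standing in for an LLM-based slot extractor; the slot names (`food`, `price_range`, `party_size`) and extraction rules are illustrative assumptions, not from any specific system.

```python
def extract_slots(utterance):
    """Toy stand-in for an LLM-based slot extractor (rules are illustrative)."""
    slots = {}
    words = [w.strip(",.!?") for w in utterance.lower().split()]
    for i, w in enumerate(words):
        if w in ("italian", "chinese", "mexican"):
            slots["food"] = w
        if w in ("cheap", "moderate", "expensive"):
            slots["price_range"] = w
        if w == "for" and i + 1 < len(words) and words[i + 1].isdigit():
            slots["party_size"] = int(words[i + 1])
    return slots

def track_dialogue(turns):
    """Accumulate the belief state turn by turn; later turns override earlier values."""
    state = {}
    for turn in turns:
        state.update(extract_slots(turn))
    return state

turns = [
    "I want a cheap italian restaurant",
    "Actually make it chinese, table for 4",
]
print(track_dialogue(turns))
# → {'price_range': 'cheap', 'food': 'chinese', 'party_size': 4}
```

Note how the second turn overwrites the `food` slot while keeping `price_range` from the first turn; handling such corrections robustly is one of the main difficulties DST research targets.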

Papers