Task-Oriented Dialogue
Task-oriented dialogue (TOD) research focuses on building conversational agents that complete specific user tasks, such as making reservations or providing information. Current work emphasizes improving the accuracy and robustness of dialogue state tracking (DST), particularly with large language models (LLMs) and techniques such as function calling and in-context learning. It also addresses challenges such as handling clarification questions, managing multi-user interactions, and mitigating biases and unsafe responses. These advances improve the efficiency and user experience of virtual assistants and other conversational AI applications.
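To make the function-calling framing of DST mentioned above concrete, the following is a minimal Python sketch: the LLM is prompted to emit its state update as a JSON "function call", which is then merged into the running dialogue state. The schema, slot names, and the simulated model output are illustrative assumptions, not drawn from any of the listed papers.

import json

# Hypothetical slot schema for a restaurant-booking domain (illustrative only).
BOOKING_SCHEMA = {
    "name": "update_dialogue_state",
    "description": "Record slot values the user has specified so far.",
    "parameters": {
        "area": "string",
        "food": "string",
        "price_range": "string",
        "book_time": "string",
        "book_people": "integer",
    },
}

def track_state(prior_state: dict, llm_function_call: str) -> dict:
    """Merge slot-value pairs from an LLM 'function call' into the running state."""
    update = json.loads(llm_function_call)
    # Accept only slots declared in the schema; discard any hallucinated keys.
    allowed = BOOKING_SCHEMA["parameters"].keys()
    new_state = dict(prior_state)
    new_state.update({k: v for k, v in update.items() if k in allowed})
    return new_state

# Simulated model output for the turn "a cheap italian place for 4 at 7pm".
simulated_call = '{"food": "italian", "price_range": "cheap", "book_people": 4, "book_time": "19:00"}'
state = track_state({}, simulated_call)
print(state)  # {'food': 'italian', 'price_range': 'cheap', 'book_people': 4, 'book_time': '19:00'}

In this framing, DST accuracy reduces to how reliably the model fills the declared schema, which is why schema design and in-context examples are central to the LLM-based approaches surveyed here.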
Papers
Improving Generalization in Task-oriented Dialogues with Workflows and Action Plans
Stefania Raimondo, Christopher Pal, Xiaotian Liu, David Vazquez, Hector Palacios
EmoUS: Simulating User Emotions in Task-Oriented Dialogues
Hsien-Chin Lin, Shutong Feng, Christian Geishauser, Nurul Lubis, Carel van Niekerk, Michael Heck, Benjamin Ruppik, Renato Vukovic, Milica Gašić