Task Oriented Dialogue
Task-oriented dialogue (TOD) research focuses on building conversational agents that can complete specific user tasks, such as making reservations or providing information. Current work emphasizes improving the accuracy and robustness of dialogue state tracking (DST), particularly with large language models (LLMs) and techniques such as function calling and in-context learning. It also addresses challenges such as handling clarification questions, managing multi-user interactions, and mitigating biases and unsafe responses. These advances improve the efficiency and user experience of virtual assistants and other conversational AI applications.
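The DST-as-function-calling framing mentioned above can be sketched as follows: the tracker exposes a slot schema to the LLM in the style of a function signature, and the model's structured arguments are merged into a running dialogue state each turn. This is a minimal illustrative sketch; the schema, slot names, and the hard-coded model outputs are assumptions for demonstration, not taken from the papers listed below.

```python
import json

# Hypothetical function-calling schema for a restaurant-booking domain;
# slot names are illustrative, not from any specific paper or API.
BOOK_RESTAURANT_SCHEMA = {
    "name": "update_dialogue_state",
    "description": "Record the user's constraints for a restaurant booking.",
    "parameters": {
        "type": "object",
        "properties": {
            "cuisine": {"type": "string"},
            "area": {"type": "string"},
            "party_size": {"type": "integer"},
        },
    },
}

def apply_state_update(state: dict, llm_arguments_json: str) -> dict:
    """Merge a turn's function-call arguments into the dialogue state.

    `llm_arguments_json` stands in for the arguments string a
    function-calling LLM would emit; only non-null slots overwrite
    values carried over from earlier turns.
    """
    update = json.loads(llm_arguments_json)
    return {**state, **{k: v for k, v in update.items() if v is not None}}

# Simulated two-turn dialogue; the "LLM outputs" are hard-coded here.
state: dict = {}
state = apply_state_update(state, '{"cuisine": "italian", "area": "centre"}')
state = apply_state_update(state, '{"party_size": 4}')
print(state)  # slots from turn 1 persist alongside the turn-2 update
```

The key property this sketch shows is slot accumulation across turns: a turn that mentions only `party_size` must not erase the `cuisine` and `area` constraints tracked earlier.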
Papers
Task Oriented Dialogue as a Catalyst for Self-Supervised Automatic Speech Recognition
David M. Chan, Shalini Ghosh, Hitesh Tulsiani, Ariya Rastrow, Björn Hoffmeister
Are LLMs Robust for Spoken Dialogues?
Seyed Mahed Mousavi, Gabriel Roccabruna, Simone Alghisi, Massimo Rizzoli, Mirco Ravanelli, Giuseppe Riccardi