Multilingual Task-Oriented Dialogue

Multilingual task-oriented dialogue (MTD) research aims to build conversational AI systems that can handle diverse languages and complete user requests across many domains. Current efforts focus on improving data efficiency through techniques such as in-context learning and data augmentation, often pairing large language models (LLMs) with fine-tuned pretrained language models (PLMs), while also addressing performance disparities across languages and developing robust evaluation methods. This work is crucial for broadening global access to AI-powered services, and it calls for richer multilingual datasets and a better understanding of the factors that influence cross-lingual performance.
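To make the in-context learning setup above concrete, here is a minimal, hypothetical sketch of how a few-shot prompt for cross-lingual intent classification might be assembled: English (high-resource) exemplars paired with a target-language utterance. The utterances, intent labels, and function name are illustrative assumptions, not drawn from any specific MTD dataset or paper.

```python
# Hypothetical sketch: few-shot in-context learning prompt for multilingual
# intent classification in task-oriented dialogue. Exemplars and labels below
# are invented for illustration.

ENGLISH_EXEMPLARS = [
    ("Book a table for two at 7pm", "restaurant_booking"),
    ("What's the weather in Berlin tomorrow?", "weather_query"),
    ("Set an alarm for 6am", "alarm_set"),
]

def build_icl_prompt(exemplars, target_utterance):
    """Pair high-resource (English) exemplars with a target-language
    utterance, a common cross-lingual transfer setup for LLM prompting."""
    lines = ["Classify the intent of the final utterance."]
    for utterance, intent in exemplars:
        lines.append(f"Utterance: {utterance}\nIntent: {intent}")
    # The model is expected to continue the pattern and fill in the intent.
    lines.append(f"Utterance: {target_utterance}\nIntent:")
    return "\n\n".join(lines)

# Target utterance in French ("Book a table for four at 8pm").
prompt = build_icl_prompt(ENGLISH_EXEMPLARS, "Réserve une table pour quatre à 20h")
print(prompt)
```

The resulting string would be sent to an LLM, whose completion after the final "Intent:" is taken as the predicted label; data-augmentation approaches instead use such templates to generate synthetic training utterances for fine-tuning a PLM.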

Papers