Multi-Turn Dialogue
Multi-turn dialogue research focuses on enabling large language models (LLMs) to engage in natural, coherent, and contextually relevant conversations spanning multiple turns. Current research emphasizes improving LLM performance in multi-turn settings through techniques like reinforcement learning from human feedback (RLHF), knowledge distillation, and novel masking strategies to optimize both accuracy and efficiency. This area is crucial for advancing human-computer interaction, creating more sophisticated conversational agents for various applications, and developing robust benchmarks for evaluating LLMs' abilities in complex, dynamic dialogues.
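One of the techniques mentioned above, turn-level masking for multi-turn fine-tuning, can be made concrete with a small sketch. The snippet below is illustrative only and assumes a toy whitespace tokenizer, role tags of my own choosing, and the common convention of marking non-assistant tokens with -100 (the ignore index used by PyTorch-style cross-entropy losses) so that the training loss is computed only on assistant responses while earlier turns remain visible as context.

IGNORE_INDEX = -100  # conventional ignore index for cross-entropy losses

def tokenize(text):
    """Toy whitespace tokenizer; a real system would use a subword tokenizer."""
    return text.split()

def build_training_example(turns):
    """Flatten a multi-turn dialogue into parallel (tokens, labels) lists.

    Tokens from user turns are kept in the input as context but masked in
    the labels, so gradients flow only through assistant responses.
    """
    tokens, labels = [], []
    for role, text in turns:
        turn_tokens = [f"<{role}>"] + tokenize(text) + ["</turn>"]
        tokens.extend(turn_tokens)
        if role == "assistant":
            labels.extend(turn_tokens)                          # learn these
        else:
            labels.extend([IGNORE_INDEX] * len(turn_tokens))    # context only
    return tokens, labels

if __name__ == "__main__":
    dialogue = [
        ("user", "Book a table for two tonight."),
        ("assistant", "Sure, what time works for you?"),
        ("user", "Seven o'clock."),
        ("assistant", "Done, your table is booked for 7 pm."),
    ]
    for tok, lab in zip(*build_training_example(dialogue)):
        print(f"{tok:15} -> {lab}")

Running the script prints each token alongside its label, showing that only the assistant's tokens carry supervision; the specific tags and masking granularity are assumptions, since published masking strategies differ in exactly which spans they include.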
Papers
TransESC: Smoothing Emotional Support Conversation via Turn-Level State Transition
Weixiang Zhao, Yanyan Zhao, Shilong Wang, Bing Qin
VicunaNER: Zero/Few-shot Named Entity Recognition using Vicuna
Bin Ji
Out-of-Domain Intent Detection Considering Multi-Turn Dialogue Contexts
Hao Lang, Yinhe Zheng, Binyuan Hui, Fei Huang, Yongbin Li
FCC: Fusing Conversation History and Candidate Provenance for Contextual Response Ranking in Dialogue Systems
Zihao Wang, Eugene Agichtein, Jinho Choi
Dialog act guided contextual adapter for personalized speech recognition
Feng-Ju Chang, Thejaswi Muniyappa, Kanthashree Mysore Sathyendra, Kai Wei, Grant P. Strimel, Ross McGowan