Multi-Turn Dialogue
Multi-turn dialogue research focuses on enabling large language models (LLMs) to engage in natural, coherent, and contextually relevant conversations spanning multiple turns. Current work emphasizes improving LLM performance in multi-turn settings through techniques such as reinforcement learning from human feedback (RLHF), knowledge distillation, and novel masking strategies that optimize both accuracy and efficiency. This area is crucial for advancing human-computer interaction, building more capable conversational agents, and developing robust benchmarks for evaluating LLMs' abilities in complex, dynamic dialogues.
Papers
BotChat: Evaluating LLMs' Capabilities of Having Multi-Turn Dialogues
Haodong Duan, Jueqi Wei, Chonghua Wang, Hongwei Liu, Yixiao Fang, Songyang Zhang, Dahua Lin, Kai Chen
Explicit Alignment and Many-to-many Entailment Based Reasoning for Conversational Machine Reading
Yangyang Luo, Shiyu Tian, Caixia Yuan, Xiaojie Wang
TransESC: Smoothing Emotional Support Conversation via Turn-Level State Transition
Weixiang Zhao, Yanyan Zhao, Shilong Wang, Bing Qin
VicunaNER: Zero/Few-shot Named Entity Recognition using Vicuna
Bin Ji
Out-of-Domain Intent Detection Considering Multi-Turn Dialogue Contexts
Hao Lang, Yinhe Zheng, Binyuan Hui, Fei Huang, Yongbin Li