Multi-Turn Dialogue
Multi-turn dialogue research focuses on enabling large language models (LLMs) to engage in natural, coherent, and contextually relevant conversations spanning multiple turns. Current research emphasizes improving LLM performance in multi-turn settings through techniques like reinforcement learning from human feedback (RLHF), knowledge distillation, and novel masking strategies to optimize both accuracy and efficiency. This area is crucial for advancing human-computer interaction, creating more sophisticated conversational agents for various applications, and developing robust benchmarks for evaluating LLMs' abilities in complex, dynamic dialogues.
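One recurring technique behind the "masking strategies" mentioned above is turn-level loss masking during supervised fine-tuning: the full multi-turn conversation is fed to the model, but only the assistant's tokens contribute to the training loss. The sketch below is a minimal, illustrative example of that idea with a toy character-level tokenizer; the role tags, the `IGNORE_INDEX` convention, and the helper names are assumptions for illustration, not the exact setup of any paper listed here.

```python
# Minimal sketch of assistant-only loss masking for multi-turn fine-tuning.
# Token IDs, role tags, and IGNORE_INDEX are illustrative assumptions.
from typing import Dict, List, Tuple

IGNORE_INDEX = -100  # conventional "ignore" label for cross-entropy losses


def tokenize(text: str) -> List[int]:
    """Stand-in tokenizer: maps each character to its code point."""
    return [ord(c) for c in text]


def build_inputs_and_labels(dialogue: List[Dict[str, str]]) -> Tuple[List[int], List[int]]:
    """Concatenate a multi-turn dialogue into one token sequence.

    User turns stay in the input so the model sees full context, but their
    label positions are set to IGNORE_INDEX, so the loss is computed only
    on the assistant's responses across all turns.
    """
    input_ids: List[int] = []
    labels: List[int] = []
    for turn in dialogue:
        tokens = tokenize(f"<{turn['role']}> {turn['content']} ")
        input_ids.extend(tokens)
        if turn["role"] == "assistant":
            labels.extend(tokens)                         # supervise assistant tokens
        else:
            labels.extend([IGNORE_INDEX] * len(tokens))   # mask user/system tokens
    return input_ids, labels


if __name__ == "__main__":
    dialogue = [
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Go to Settings and choose Reset."},
        {"role": "user", "content": "And if I forgot my email?"},
        {"role": "assistant", "content": "Contact support to verify your identity."},
    ]
    ids, labels = build_inputs_and_labels(dialogue)
    masked = sum(1 for label in labels if label == IGNORE_INDEX)
    print(f"{len(ids)} tokens total, {masked} masked (user turns), "
          f"{len(ids) - masked} supervised (assistant turns)")
```

In practice this masking lets a single forward pass supervise every assistant turn in a conversation at once, which is where the efficiency gains over per-turn training examples come from.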
Papers
LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing
Wei-Ge Chen, Irina Spiridonova, Jianwei Yang, Jianfeng Gao, Chunyuan Li
SoulChat: Improving LLMs' Empathy, Listening, and Comfort Abilities through Fine-tuning with Multi-turn Empathy Conversations
Yirong Chen, Xiaofen Xing, Jingkai Lin, Huimin Zheng, Zhenyu Wang, Qi Liu, Xiangmin Xu
BotChat: Evaluating LLMs' Capabilities of Having Multi-Turn Dialogues
Haodong Duan, Jueqi Wei, Chonghua Wang, Hongwei Liu, Yixiao Fang, Songyang Zhang, Dahua Lin, Kai Chen
Explicit Alignment and Many-to-many Entailment Based Reasoning for Conversational Machine Reading
Yangyang Luo, Shiyu Tian, Caixia Yuan, Xiaojie Wang