Multi-Turn Dialogue
Multi-turn dialogue research focuses on enabling large language models (LLMs) to engage in natural, coherent, and contextually relevant conversations that span multiple turns. Current work emphasizes improving LLM performance in multi-turn settings through techniques such as reinforcement learning from human feedback (RLHF), knowledge distillation, and masking strategies that optimize both accuracy and efficiency. This line of work is central to advancing human-computer interaction, building more capable conversational agents, and developing robust benchmarks for evaluating LLMs on complex, dynamic dialogues.
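To make the masking idea mentioned above concrete, the sketch below shows one common loss-masking scheme used in multi-turn fine-tuning: only the assistant's tokens receive supervision, while user turns are masked with the loss ignore index. This is a minimal, self-contained Python illustration and is not drawn from any of the papers listed here; toy_tokenize and build_multiturn_example are hypothetical helpers, and the whitespace tokenizer stands in for a real subword tokenizer.

# A minimal sketch (an assumption, not from any specific paper) of loss masking
# for multi-turn fine-tuning: only assistant tokens contribute to the loss,
# while user tokens are masked with the ignore index (-100).

IGNORE_INDEX = -100  # PyTorch's default ignore_index for cross-entropy loss


def toy_tokenize(text):
    """Stand-in tokenizer: maps each whitespace-separated word to an id."""
    return [hash(w) % 50_000 for w in text.split()]


def build_multiturn_example(turns):
    """Concatenate a multi-turn dialogue into one (input_ids, labels) pair.

    `turns` is a list of (role, text) pairs. Loss is computed only on
    assistant turns; all other tokens get IGNORE_INDEX as their label.
    """
    input_ids, labels = [], []
    for role, text in turns:
        ids = toy_tokenize(f"<{role}> {text}")
        input_ids.extend(ids)
        if role == "assistant":
            labels.extend(ids)                         # supervise assistant tokens
        else:
            labels.extend([IGNORE_INDEX] * len(ids))   # mask user tokens
    return input_ids, labels


if __name__ == "__main__":
    dialogue = [
        ("user", "What is the capital of France?"),
        ("assistant", "The capital of France is Paris."),
        ("user", "And of Italy?"),
        ("assistant", "Rome is the capital of Italy."),
    ]
    ids, labels = build_multiturn_example(dialogue)
    masked = sum(1 for label in labels if label == IGNORE_INDEX)
    print(f"{len(ids)} tokens total, {masked} masked out of the loss")

In practice the same pattern is applied on top of a real tokenizer and chat template; the key design choice illustrated here is that the full conversation history stays in the model's context while the training signal comes only from the assistant's responses.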
Papers
FCC: Fusing Conversation History and Candidate Provenance for Contextual Response Ranking in Dialogue Systems
Zihao Wang, Eugene Agichtein, Jinho Choi
Dialog act guided contextual adapter for personalized speech recognition
Feng-Ju Chang, Thejaswi Muniyappa, Kanthashree Mysore Sathyendra, Kai Wei, Grant P. Strimel, Ross McGowan