Multi-Turn Dialogue
Multi-turn dialogue research focuses on enabling large language models (LLMs) to engage in natural, coherent, and contextually relevant conversations spanning multiple turns. Current research emphasizes improving LLM performance in multi-turn settings through techniques like reinforcement learning from human feedback (RLHF), knowledge distillation, and novel masking strategies to optimize both accuracy and efficiency. This area is crucial for advancing human-computer interaction, creating more sophisticated conversational agents for various applications, and developing robust benchmarks for evaluating LLMs' abilities in complex, dynamic dialogues.
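One of the techniques mentioned above, masking strategies for multi-turn training, can be sketched briefly. A common approach is to compute the training loss only on assistant tokens, masking out user turns and prompts. The snippet below is a minimal illustration of that idea, assuming the usual convention that a label of -100 is ignored by the cross-entropy loss; the exact masking scheme used in any given paper may differ.

```python
# Sketch of turn-level loss masking for multi-turn dialogue fine-tuning.
# Assumption: illustrates the common practice of training only on assistant
# tokens; specific papers may use different or more elaborate schemes.

IGNORE_INDEX = -100  # label value ignored by cross-entropy in most frameworks


def build_labels(turns):
    """Given (role, token_ids) turns, concatenate the tokens and mask
    every non-assistant token so loss is computed only on replies."""
    input_ids, labels = [], []
    for role, toks in turns:
        input_ids.extend(toks)
        if role == "assistant":
            labels.extend(toks)  # learn to predict these tokens
        else:
            labels.extend([IGNORE_INDEX] * len(toks))  # masked out of the loss
    return input_ids, labels


# Hypothetical two-turn dialogue with made-up token ids.
dialogue = [
    ("user", [101, 102]),
    ("assistant", [201, 202, 203]),
    ("user", [103]),
    ("assistant", [204]),
]
ids, labels = build_labels(dialogue)
print(labels)  # [-100, -100, 201, 202, 203, -100, 204]
```

Masking this way keeps the full conversation in context for every prediction while preventing the model from being trained to imitate user turns, which is one of the efficiency and accuracy trade-offs multi-turn training work explores.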