Dialogue Generation
Dialogue generation focuses on building natural and engaging conversational agents, aiming to improve the fluency, coherence, and personalization of AI-driven conversations. Current research emphasizes mitigating hallucinations and biases, improving efficiency through techniques such as knowledge distillation and retrieval-augmented generation, and enhancing personalization across model architectures including LLMs, diffusion models, and encoder-decoder models. These advances have significant implications for chatbots, virtual assistants, and therapeutic AI, improving human-computer interaction and potentially impacting fields such as mental health support and education.
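To make the retrieval-augmented generation technique mentioned above concrete, the sketch below shows a minimal retrieval step for a single dialogue turn: candidate knowledge snippets are scored against the user's utterance, the best match is retrieved, and both are assembled into a prompt for a downstream dialogue model. The bag-of-words similarity, the example knowledge base, and the prompt format are illustrative assumptions, not taken from any of the papers listed here.

```python
from collections import Counter
import math


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use a neural encoder."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query: str, knowledge: list[str]) -> str:
    """Return the knowledge snippet most similar to the user utterance."""
    q = embed(query)
    return max(knowledge, key=lambda doc: cosine(q, embed(doc)))


def build_prompt(history: list[str], user_turn: str, snippet: str) -> str:
    """Assemble retrieved knowledge, dialogue history, and the new turn into a prompt."""
    lines = [f"Knowledge: {snippet}"]
    lines += [f"User: {h}" if i % 2 == 0 else f"Assistant: {h}" for i, h in enumerate(history)]
    lines += [f"User: {user_turn}", "Assistant:"]
    return "\n".join(lines)


if __name__ == "__main__":
    knowledge_base = [
        "Knowledge distillation trains a small student model to imitate a larger teacher.",
        "Retrieval-augmented generation grounds responses in documents fetched at inference time.",
    ]
    user_turn = "How does retrieval-augmented generation reduce hallucinations?"
    snippet = retrieve(user_turn, knowledge_base)
    prompt = build_prompt([], user_turn, snippet)
    print(prompt)  # this prompt would then be passed to a dialogue model (not included here)
```

In a full system the retrieved snippet grounds the model's reply in external text, which is one way research in this area attempts to reduce hallucinations.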
Papers
DEMO: Reframing Dialogue Interaction with Fine-grained Element Modeling
Minzheng Wang, Xinghua Zhang, Kun Chen, Nan Xu, Haiyang Yu, Fei Huang, Wenji Mao, Yongbin Li
Multi-Party Supervised Fine-tuning of Language Models for Multi-Party Dialogue Generation
Xiaoyu Wang, Ningyuan Xi, Teng Chen, Qingqing Gu, Yue Zhao, Xiaokai Chen, Zhonglin Jiang, Yong Chen, Luo Ji