Open Domain Dialogue
Open-domain dialogue research aims to build conversational AI systems that can hold natural, coherent, and informative conversations across a wide range of topics. Much current work focuses on improving automatic evaluation: large language models (LLMs) are used to assess dialogue quality along dimensions such as coherence, relevance, and engagingness, and techniques such as contrastive learning and pairwise comparison are explored to make these judgments more accurate. More reliable evaluation is a prerequisite for building robust, human-like conversational agents, with implications for chatbots, virtual assistants, and human-computer interaction more broadly.
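To make the pairwise-comparison idea concrete, here is a minimal sketch: instead of assigning each candidate response an absolute quality score, an evaluator compares two responses against each other given the same dialogue context and outputs a preference. The `overlap_score` function below is a deliberately simple stand-in for a learned quality model (it is not the PairEval method or any published scorer); the names `overlap_score` and `pairwise_prefer` are hypothetical.

```python
# Toy sketch of pairwise-comparison dialogue evaluation. The scorer is a
# simple word-overlap heuristic standing in for a learned (e.g. LLM-based)
# quality model; only the comparison structure is the point.

def overlap_score(context: str, response: str) -> float:
    """Fraction of response tokens that also appear in the context
    (a crude proxy for topical relevance)."""
    ctx = set(context.lower().split())
    toks = response.lower().split()
    if not toks:
        return 0.0
    return sum(t in ctx for t in toks) / len(toks)

def pairwise_prefer(context: str, resp_a: str, resp_b: str) -> str:
    """Compare two candidate responses for the same context and
    return which one is preferred ('A', 'B', or 'tie')."""
    a = overlap_score(context, resp_a)
    b = overlap_score(context, resp_b)
    if a > b:
        return "A"
    if b > a:
        return "B"
    return "tie"

context = "What is your favourite hiking trail near Seattle?"
on_topic = "I love the hiking trails near Seattle, especially in autumn."
off_topic = "Bananas are a good source of potassium."
print(pairwise_prefer(context, on_topic, off_topic))  # prefers the on-topic reply: A
```

In practice the heuristic scorer would be replaced by an LLM prompted (or fine-tuned) to judge the pair directly; the appeal of the pairwise setup is that relative judgments are often more reliable than absolute scores.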
Papers
Emphasising Structured Information: Integrating Abstract Meaning Representation into LLMs for Enhanced Open-Domain Dialogue Evaluation
Bohao Yang, Kun Zhao, Chen Tang, Dong Liu, Liang Zhan, Chenghua Lin
PairEval: Open-domain Dialogue Evaluation with Pairwise Comparison
ChaeHun Park, Minseok Choi, Dohyun Lee, Jaegul Choo