Dialogue Task

Dialogue task research focuses on enabling computers to engage in meaningful, multi-turn conversations that are both fluent and effective at completing tasks. Current efforts concentrate on improving Large Language Models (LLMs) through techniques such as dual-process planning, preference learning from process feedback, and multi-task pre-training spanning dialogue management, generation, and comprehension. These advances aim to enhance LLMs' grasp of context, pragmatics, and implicit meaning, yielding more robust and human-like conversational agents with applications in fields such as healthcare and legal consultation. The ultimate goal is to create systems that not only generate appropriate responses but also proactively steer conversations toward desired outcomes.
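The management–generation–comprehension decomposition above can be sketched as a toy task-oriented dialogue loop. Everything here is an illustrative assumption, not taken from any particular system: keyword matching stands in for an LLM's comprehension step, and a slot-filling policy stands in for dialogue management that proactively steers toward task completion.

```python
# Toy multi-turn, task-oriented dialogue loop (illustrative sketch only).

def extract_slots(utterance, slot_keywords):
    """Comprehension step: naive keyword-based slot extraction
    (a stand-in for an LLM's understanding of the user turn)."""
    found = {}
    for slot, keywords in slot_keywords.items():
        for kw in keywords:
            if kw in utterance.lower():
                found[slot] = kw
    return found

def next_action(state, required):
    """Management step: pick the next dialogue act, steering
    the conversation toward completing the task."""
    missing = [s for s in required if s not in state]
    if missing:
        return f"ask:{missing[0]}"  # request the first missing slot
    return "complete"

def respond(action):
    """Generation step: map a dialogue act to a surface response."""
    if action == "complete":
        return "Great, I have everything I need to book your appointment."
    slot = action.split(":", 1)[1]
    return f"Could you tell me your preferred {slot}?"

# Hypothetical appointment-booking task with two required slots.
slot_keywords = {"date": ["monday", "tuesday"], "time": ["morning", "afternoon"]}
required = ["date", "time"]
state = {}

for user_turn in ["I'd like an appointment on Monday", "Morning works best"]:
    state.update(extract_slots(user_turn, slot_keywords))
    action = next_action(state, required)
    print(respond(action))
```

A real system replaces each of the three functions with learned components; the point here is only the loop structure: comprehend the turn, update state, choose an act, generate a response.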

Papers