Dialogue Representation
Dialogue representation research focuses on building effective computational models of conversations that capture the nuanced meaning and context of multi-turn interactions. Current efforts concentrate on adapting pre-trained language models to dialogue understanding through techniques such as contrastive learning, self-supervised learning, and the incorporation of commonsense knowledge, typically with transformer-based architectures. These advances are crucial for applications such as task-oriented dialogue systems, emotion recognition in conversations, and abstractive dialogue summarization, ultimately enabling more natural, human-like interactions with AI.
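As a concrete illustration of the contrastive learning objective mentioned above, the sketch below computes an InfoNCE-style loss over two encoded "views" of each dialogue in a batch (for example, produced by a transformer encoder from two augmentations of the same conversation). This is a minimal sketch under those assumptions; the function names, tensor shapes, augmentation choices, and temperature value are illustrative and not taken from any specific paper.

```python
# Minimal sketch of a contrastive (InfoNCE / NT-Xent) objective over dialogue
# embeddings. Assumes two augmented views of each dialogue have already been
# encoded into fixed-size vectors; all names here are illustrative.
import torch
import torch.nn.functional as F


def info_nce_loss(view_a: torch.Tensor,
                  view_b: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """Each dialogue's two views are positives; the other dialogues in the
    batch act as in-batch negatives."""
    # L2-normalise so dot products equal cosine similarities.
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    # Pairwise similarity matrix (batch x batch), scaled by temperature.
    logits = a @ b.t() / temperature
    # The positive for example i sits on the diagonal.
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric cross-entropy over both matching directions (a->b and b->a).
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    batch, dim = 8, 256
    # Stand-ins for encoder outputs of two augmented views of the same
    # dialogues (e.g., utterance dropout or turn reordering as augmentations).
    view_a = torch.randn(batch, dim)
    view_b = view_a + 0.05 * torch.randn(batch, dim)  # slightly perturbed view
    print(f"contrastive loss: {info_nce_loss(view_a, view_b).item():.4f}")
```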