Dialogue Policy
Dialogue policy research focuses on designing algorithms that let agents such as robots or chatbots hold effective, goal-oriented conversations. Current work emphasizes improving the efficiency and robustness of reinforcement learning (RL) for dialogue policy learning, often through techniques such as reward shaping, adversarial training, and transformer-based architectures that produce more diverse and contextually appropriate responses. These advances aim to make conversational agents more natural and engaging in applications ranging from human-robot interaction to task-oriented dialogue systems, improving both user experience and system performance.
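To make the RL-plus-reward-shaping recipe above concrete, the sketch below trains a tiny dialogue policy with REINFORCE and potential-based reward shaping on a toy slot-filling task. It is a minimal illustration, not the method of any particular paper listed here: the `DialoguePolicy` network, the `potential` function, and the simulated user are all illustrative assumptions.

```python
# Minimal sketch: RL dialogue policy learning with potential-based reward shaping.
# Toy task (assumed for illustration): the agent must request each of N slots
# once, then end the dialogue.
import torch
import torch.nn as nn

N_SLOTS = 3
ACTIONS = N_SLOTS + 1          # request(slot_0..slot_{N-1}) or "end dialogue"
GAMMA = 0.99

class DialoguePolicy(nn.Module):
    """Maps a binary belief state (which slots are filled) to action logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_SLOTS, 32), nn.Tanh(),
                                 nn.Linear(32, ACTIONS))

    def forward(self, state):
        return self.net(state)

def potential(state):
    # Shaping potential: number of filled slots (a denser signal than the
    # sparse success/failure reward at the end of the dialogue).
    return state.sum().item()

def run_episode(policy, max_turns=10):
    state = torch.zeros(N_SLOTS)
    log_probs, rewards = [], []
    for _ in range(max_turns):
        dist = torch.distributions.Categorical(logits=policy(state))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        if action.item() == N_SLOTS:         # agent chooses to end the dialogue
            rewards.append(1.0 if state.sum() == N_SLOTS else -1.0)
            break
        next_state = state.clone()
        next_state[action] = 1.0             # simulated user answers the request
        # Potential-based shaping term F = gamma * phi(s') - phi(s),
        # plus a small per-turn penalty to encourage short dialogues.
        rewards.append(GAMMA * potential(next_state) - potential(state) - 0.05)
        state = next_state
    return log_probs, rewards

policy = DialoguePolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
for episode in range(300):
    log_probs, rewards = run_episode(policy)
    # Discounted returns for the REINFORCE update.
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + GAMMA * G
        returns.insert(0, G)
    loss = -sum(lp * g for lp, g in zip(log_probs, returns))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Potential-based shaping is used here because it densifies the reward signal without changing which policy is optimal, which is the usual motivation for reward shaping in dialogue policy learning.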