Dialogue Assessment
Dialogue assessment focuses on automatically evaluating the quality and effectiveness of conversations, particularly in chatbot development and second-language learning. Current research emphasizes robust, interpretable assessment frameworks that often leverage large language models (LLMs) and combine micro-level linguistic features (e.g., word choice, backchannels) with higher-level dialogue characteristics (e.g., topic management, constructiveness). This work improves the design and evaluation of conversational AI systems and supports more effective language-learning tools, ultimately yielding more human-like and beneficial interactions.
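To make the rubric-based, LLM-as-judge approach described above concrete, the sketch below scores a dialogue on both micro-level features and higher-level characteristics. It is a minimal illustration, not a method from any specific paper: the rubric wording, the `Turn` and `build_prompt` names, and the `judge` callable (a stand-in for whatever LLM API you use) are all assumptions introduced here.

```python
import json
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Turn:
    speaker: str
    text: str

# Rubric dimensions drawn from the two levels described above:
# micro-level linguistic features and higher-level dialogue characteristics.
RUBRIC = {
    "word_choice": "Are word choices natural and appropriate to the context?",
    "backchannels": "Does the listener give appropriate backchannels (e.g., 'mm-hmm', 'I see')?",
    "topic_management": "Are topics introduced, maintained, and closed coherently?",
    "constructiveness": "Does the conversation build productively on prior turns?",
}

def build_prompt(dialogue: List[Turn]) -> str:
    """Assemble a single judging prompt from the transcript and rubric."""
    transcript = "\n".join(f"{t.speaker}: {t.text}" for t in dialogue)
    criteria = "\n".join(f"- {name}: {q}" for name, q in RUBRIC.items())
    return (
        "Rate the following dialogue from 1 (poor) to 5 (excellent) on each "
        "criterion. Respond with a JSON object keyed by criterion name.\n\n"
        f"Criteria:\n{criteria}\n\nDialogue:\n{transcript}"
    )

def assess(dialogue: List[Turn], judge: Callable[[str], str]) -> Dict[str, int]:
    """`judge` is any function that sends a prompt to an LLM and returns its text reply."""
    raw = judge(build_prompt(dialogue))
    scores = json.loads(raw)  # assumes the judge complies with the JSON instruction
    return {name: int(scores[name]) for name in RUBRIC}

if __name__ == "__main__":
    # Stub judge for demonstration; in practice this would call a hosted LLM.
    dummy_judge = lambda prompt: json.dumps({k: 4 for k in RUBRIC})
    dialogue = [
        Turn("Learner", "I go to park yesterday."),
        Turn("Tutor", "I see! You went to the park yesterday. What did you do there?"),
    ]
    print(assess(dialogue, dummy_judge))
```

Keeping the judge as an injected callable leaves the sketch provider-agnostic, and per-criterion scores keep the assessment interpretable rather than collapsing quality into a single number.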