Dialogue Assessment

Dialogue assessment focuses on automatically evaluating the quality and effectiveness of conversations, particularly in the context of chatbot development and second-language learning. Current research emphasizes robust and interpretable assessment frameworks, often leveraging large language models (LLMs) and incorporating both micro-level linguistic features (e.g., word choice, backchannels) and higher-level dialogue characteristics (e.g., topic management, constructiveness). This work is crucial for improving the design and evaluation of conversational AI systems and for building more effective language-learning tools, with the broader aim of more human-like and beneficial interactions.

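To make the LLM-based framing above concrete, the sketch below shows one minimal way such an assessment could be set up: a rubric prompt that asks a model to score a dialogue on topic management, constructiveness, and micro-level features, with the result parsed as JSON. The rubric wording, dimension names, and the `call_llm` hook are illustrative assumptions, not a method from any specific paper; the stub should be replaced with a real LLM client.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; swap in your preferred client/API.

    Returns a canned response here so the example runs end to end.
    """
    return '{"topic_management": 4, "constructiveness": 3, "micro_level": 4}'

# Hypothetical rubric covering the micro- and macro-level dimensions mentioned above.
RUBRIC = """You are rating a two-party dialogue.
Return JSON with integer scores from 1 (poor) to 5 (excellent) for:
- "topic_management": how coherently topics are introduced and developed
- "constructiveness": whether turns build on the partner's contributions
- "micro_level": appropriateness of word choice and backchannels

Dialogue:
{dialogue}

JSON only:"""

def assess_dialogue(dialogue: str) -> dict:
    """Score one dialogue against the rubric and parse the model's JSON output."""
    raw = call_llm(RUBRIC.format(dialogue=dialogue))
    return json.loads(raw)

if __name__ == "__main__":
    sample = (
        "A: I tried the new cafe yesterday.\n"
        "B: Oh nice! How was the coffee?\n"
        "A: Really good, you should go."
    )
    print(assess_dialogue(sample))
```

In practice, such scores are typically aggregated over many dialogues and checked against human ratings for validity.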
Papers