Interaction Evaluation
Interaction evaluation assesses the effectiveness of human-computer interactions, particularly how humans collaborate with AI systems such as large language models (LLMs). Current research emphasizes automated evaluation frameworks that mimic human behavior and preferences, often using LLMs themselves as evaluation agents, to overcome the cost and scalability limits of human-based assessment. This work is crucial for improving the design and usability of AI systems across applications ranging from question answering and co-writing to scientific data analysis and educational tools, ensuring that these systems are not only accurate but also effectively support human needs and workflows.
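To make the "LLM as evaluation agent" idea concrete, the sketch below scores a human-AI interaction transcript along a small rubric by prompting a judge model once per dimension. It is a minimal illustration, not a method from any of the listed papers: the `complete` callable standing in for a model API, the `Turn` structure, and the rubric dimensions are all assumptions introduced here.

```python
# Minimal sketch of LLM-as-judge scoring for a human-AI interaction log.
# Assumptions (not from the source): the judge model is reached through a
# generic `complete(prompt) -> str` callable, and the rubric is illustrative.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Turn:
    speaker: str  # "user" or "assistant"
    text: str


RUBRIC = {
    "helpfulness": "Did the assistant's replies move the user toward their goal?",
    "groundedness": "Were the assistant's claims consistent with the dialogue context?",
    "workflow_fit": "Did the assistant respect the user's stated process and constraints?",
}


def build_judge_prompt(dialogue: List[Turn], dimension: str, question: str) -> str:
    # Flatten the transcript and ask for a single 1-5 rating on one dimension.
    transcript = "\n".join(f"{t.speaker}: {t.text}" for t in dialogue)
    return (
        "You are evaluating a human-AI interaction.\n"
        f"Dimension: {dimension}. {question}\n"
        "Rate from 1 (poor) to 5 (excellent). Reply with the number only.\n\n"
        f"Transcript:\n{transcript}\n\nScore:"
    )


def judge_interaction(
    dialogue: List[Turn],
    complete: Callable[[str], str],
) -> Dict[str, int]:
    """Score one interaction on each rubric dimension using an LLM judge."""
    scores: Dict[str, int] = {}
    for dimension, question in RUBRIC.items():
        reply = complete(build_judge_prompt(dialogue, dimension, question))
        digits = [c for c in reply if c.isdigit()]
        scores[dimension] = int(digits[0]) if digits else 0  # 0 marks an unparsable reply
    return scores


if __name__ == "__main__":
    # Stand-in judge so the sketch runs offline; swap in a real model call in practice.
    fake_judge = lambda prompt: "4"
    demo = [
        Turn("user", "Help me summarize this dataset."),
        Turn("assistant", "Here is a summary of the key columns..."),
    ]
    print(judge_interaction(demo, fake_judge))
```

In practice such per-dimension scores are typically aggregated over many interactions and validated against human ratings before the automated judge is trusted on its own.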