Reading Comprehension
Reading comprehension research aims to understand how humans and machines process text to extract meaning and answer questions, with the goal of improving both accuracy and robustness. Current work emphasizes developing and evaluating large language models (LLMs) for question answering, exploring techniques such as prompt engineering, data augmentation (including adversarial examples and synthetic data generation), and multimodal approaches that integrate eye-tracking and EEG data to better understand the cognitive processes involved. These advances have implications for educational assessment, information retrieval, and the development of more human-like AI systems, particularly in addressing challenges such as knowledge conflicts and out-of-distribution detection.
Papers
Choose Your Own Adventure: Interactive E-Books to Improve Word Knowledge and Comprehension Skills
Stephanie Day, Jin K. Hwang, Tracy Arner, Danielle McNamara, Carol Connor
Predicting Learning Performance with Large Language Models: A Study in Adult Literacy
Liang Zhang, Jionghao Lin, Conrad Borchers, John Sabatini, John Hollander, Meng Cao, Xiangen Hu