Pedagogical Alignment
Pedagogical alignment aims to align large language models (LLMs) with effective teaching strategies, moving beyond direct question answering to emulate human tutoring. Current research develops algorithms and datasets that train LLMs to provide scaffolded instruction, identify student misconceptions (including through counterfactual reasoning), and generate appropriate questions, often leveraging techniques such as learning from human preferences and multi-agent systems. This work is crucial for creating effective AI-powered educational tools, improving learning outcomes, and informing the responsible development of AI in education.
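The "learning from human preferences" technique mentioned above can be illustrated with a minimal sketch of a Direct Preference Optimization (DPO)-style loss, one common way to train a model toward responses humans prefer (here, hypothetically, scaffolded tutor replies over direct answers). The function name, toy log-probabilities, and the tutoring interpretation are illustrative assumptions, not taken from any of the listed papers.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO-style loss for one preference pair: pushes the policy to raise
    the log-probability margin of the preferred response over the rejected
    one, measured relative to a frozen reference model."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)): small when the preferred reply is favored
    return math.log(1.0 + math.exp(-margin))

# Toy numbers (assumed): the scaffolded tutor reply gained probability
# under the policy relative to the reference; the direct answer lost some.
loss = dpo_loss(logp_chosen=-12.0, logp_rejected=-10.0,
                ref_logp_chosen=-13.0, ref_logp_rejected=-9.5, beta=0.1)
```

With a positive margin the loss drops below log 2 (the value at zero margin), so gradient descent on it favors the pedagogically preferred response.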
Papers
Towards the Pedagogical Steering of Large Language Models for Tutoring: A Case Study with Modeling Productive Failure
Romain Puech, Jakub Macina, Julia Chatain, Mrinmaya Sachan, Manu Kapur
A LLM-Powered Automatic Grading Framework with Human-Level Guidelines Optimization
Yucheng Chu, Hang Li, Kaiqi Yang, Harry Shomer, Hui Liu, Yasemin Copur-Gencturk, Jiliang Tang