Programming Exercise
Research on programming exercises is exploring how large language models (LLMs), such as GPT-3 and GPT-4, can automate the creation and assessment of these exercises, personalize learning experiences, and provide more efficient feedback to students. Current studies focus on evaluating the quality and effectiveness of LLM-generated exercises, hints, and automated code reviews, often using quantitative metrics and qualitative analysis of student performance and perceptions. This work has significant implications for improving the efficiency and effectiveness of programming education, particularly in large introductory courses, by automating traditionally labor-intensive tasks and providing more timely and personalized feedback.