Programming Education
Programming education research currently focuses on leveraging large language models (LLMs) such as GPT-3.5 and GPT-4 to enhance learning. Key areas of investigation include personalized feedback, automated code-debugging assistance, and the generation of educational resources such as practice problems and explanations. Studies evaluate the effectiveness of these AI-powered tools, weighing their benefits (improved learning outcomes for some students) against potential drawbacks (over-reliance, decreased engagement). This work aims to guide the integration of LLMs into programming education while addressing the challenge of scaling support to large numbers of students.
Papers
Understanding Help-Seeking Behavior of Students Using LLMs vs. Web Search for Writing SQL Queries
Harsh Kumar, Mohi Reza, Jeb Mitchell, Ilya Musabirov, Lisa Zhang, Michael Liut
PyMarian: Fast Neural Machine Translation and Evaluation in Python
Thamme Gowda, Roman Grundkiewicz, Elijah Rippeth, Matt Post, Marcin Junczys-Dowmunt