Programming Task
Research on programming tasks focuses on using large language models (LLMs) to automate aspects of software development, from code generation and completion to assessment and feedback. Current efforts concentrate on improving LLMs' ability to handle complex tasks, evaluating their performance with diverse benchmarks and metrics, and developing more robust and interpretable models, often through techniques such as multi-task learning, retrieval-augmented generation, and modular reasoning. This work advances AI-assisted software engineering, improves programming education, and could transform software development practices through greater efficiency and automation.
Papers
CursorCore: Assist Programming through Aligning Anything
Hao Jiang, Qi Liu, Rui Li, Shengyu Ye, Shijin Wang
Students' Perceptions and Use of Generative AI Tools for Programming Across Different Computing Courses
Hieke Keuning, Isaac Alpizar-Chacon, Ioanna Lykourentzou, Lauren Beehler, Christian Köppe, Imke de Jong, Sergey Sosnovsky