Computer Science
Computer science research currently focuses heavily on large language models (LLMs), applying them to tasks such as code generation, automated assessment of student work, and the enhancement of educational experiences. This work evaluates LLM performance across diverse computer science subfields and explores effective pedagogical strategies for integrating the models into curricula. Assessments of their impact across multiple disciplines reveal both the potential to improve efficiency and the need for careful attention to fairness, explainability, and bias. Ultimately, this research aims to understand and optimize the role of LLMs in advancing both computer science education and research.