Pre-Trained Code Models
Pre-trained code models leverage large code datasets to learn representations of code, enabling improved performance on downstream tasks such as code generation, bug detection, and code search. Current research emphasizes improving data quality for pre-training, exploring different model architectures (including Transformer-based models and those incorporating graph neural networks), and investigating parameter-efficient fine-tuning strategies such as prompt tuning to reduce computational costs. These advances are reshaping software engineering by automating routine development tasks, improving code quality, and accelerating delivery.