Code Model
Code models are large language models (LLMs) trained on large code corpora to automate software engineering tasks such as code generation, debugging, and program understanding. Current research focuses on improving accuracy and efficiency through techniques such as synthetic data generation (e.g., from code edits or program diffs), reinforcement learning for performance optimization, and contrastive learning for robustness. These advances matter because they promise to raise programmer productivity, improve code quality and security, and enable new applications in software development and beyond.
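One common synthetic-data recipe alluded to above pairs a code snippet with an edited version and derives a training example from their diff. A minimal sketch of that idea, assuming a prompt/completion fine-tuning format (the function name, file names, and snippet pair below are hypothetical illustrations, not taken from any specific paper):

```python
import difflib

def edit_pair_to_example(before: str, after: str) -> dict:
    """Turn a (before, after) code edit into a synthetic training example:
    the model sees the original code plus a unified diff describing the edit,
    and is trained to emit the edited code."""
    diff = "\n".join(
        difflib.unified_diff(
            before.splitlines(), after.splitlines(),
            fromfile="before.py", tofile="after.py", lineterm="",
        )
    )
    return {
        "prompt": f"Apply this edit to the code:\n{diff}\n\nCode:\n{before}",
        "completion": after,
    }

# Hypothetical buggy/fixed pair serving as the edit source.
buggy = "def add(a, b):\n    return a - b\n"
fixed = "def add(a, b):\n    return a + b\n"

example = edit_pair_to_example(buggy, fixed)
```

Mined at scale from version-control histories, such pairs give the model supervised signal about how real code changes, without hand-written annotations.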