LLM-Based Code Generation
Large language model (LLM)-based code generation aims to automatically translate natural-language descriptions into executable code, boosting software development efficiency. Current research focuses heavily on mitigating issues such as "hallucinations" (plausible-looking but incorrect or nonsensical code) through techniques like retrieval-augmented generation (RAG) and iterative self-correction, often built on encoder-decoder architectures and refined with reinforcement learning. These advances matter because they address critical reliability limitations of LLM code generation, paving the way for more robust automated programming tools, with implications for both software engineering productivity and the broader field of AI.
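To make the RAG and self-correction ideas above concrete, below is a minimal, self-contained sketch of how the two techniques are typically combined: retrieve related snippets as grounding context, ask the model for a candidate solution, execute it against a test, and feed any error trace back into the next attempt. All names here (`SNIPPET_CORPUS`, `retrieve_snippets`, `call_llm`, `generate_with_self_correction`) are hypothetical illustrations, not an API from any specific paper or library, and `call_llm` is a stand-in stub so the example runs without a real model.

```python
from __future__ import annotations

import traceback

# Tiny in-memory "retrieval corpus" standing in for a real code/document index.
SNIPPET_CORPUS = {
    "factorial": "def factorial(n):\n    return 1 if n <= 1 else n * factorial(n - 1)",
    "fibonacci": "def fibonacci(n):\n    a, b = 0, 1\n    for _ in range(n):\n        a, b = b, a + b\n    return a",
}


def retrieve_snippets(task: str, k: int = 1) -> list[str]:
    """Keyword-overlap retrieval; a real system would use sparse or dense retrieval."""
    scored = [
        (sum(word in key or word in snippet.lower() for word in task.lower().split()), snippet)
        for key, snippet in SNIPPET_CORPUS.items()
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [snippet for _, snippet in scored[:k]]


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call. Here it simply echoes the retrieved context
    so the sketch stays runnable; a real system would query a model API."""
    return prompt.split("### Retrieved context\n", 1)[-1].split("\n### Task", 1)[0]


def generate_with_self_correction(task: str, test, max_attempts: int = 3) -> str | None:
    """RAG + iterative self-correction: retrieve context, draft code, then
    repeatedly repair the draft using execution/test feedback."""
    context = "\n\n".join(retrieve_snippets(task))
    feedback = ""
    for _ in range(max_attempts):
        prompt = (
            f"### Retrieved context\n{context}\n### Task\n{task}\n"
            f"### Previous error\n{feedback}"
        )
        candidate = call_llm(prompt)
        namespace: dict = {}
        try:
            exec(candidate, namespace)        # materialise the candidate code
            test(namespace)                   # run the caller-supplied unit test
            return candidate                  # accepted: the test passed
        except Exception:
            feedback = traceback.format_exc()  # feed the error back on the next attempt
    return None                               # give up after max_attempts


if __name__ == "__main__":
    def check(ns):
        assert ns["factorial"](5) == 120

    print(generate_with_self_correction("write a factorial function", test=check))
```

The design point the sketch illustrates is that the correction signal comes from execution (the caught traceback), not from the model's own judgment, which is what distinguishes test-driven self-correction from simply resampling the generator.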