Code Hallucination
Code hallucination refers to the generation of plausible-looking but incorrect, non-functional, or incomplete code by large language models (LLMs), a significant risk in software development. Current research focuses on classifying hallucination types (e.g., non-existent API calls, fabricated package names, or logical errors), developing benchmarks to evaluate LLMs' susceptibility, and exploring mitigation strategies such as grounding generation in documentation or improving model training. Understanding and addressing code hallucinations is crucial for the reliability and security of software that increasingly relies on AI-assisted code generation.
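As a concrete illustration of one hallucination type mentioned above, fabricated package names, the sketch below checks a dependency suggested by an LLM against PyPI's public JSON endpoint (https://pypi.org/pypi/&lt;name&gt;/json) before it reaches a requirements file. This is a minimal sketch, not a method from any particular paper; the function name `package_exists_on_pypi` and the example package name `pandas-profiler` are hypothetical.

```python
# Minimal sketch: guard against package-name hallucination by verifying that a
# dependency suggested by an LLM is actually published on PyPI.
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` resolves to a published package on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # A 404 here means PyPI has no such package -- a possible hallucination.
        return False


# Example usage: "pandas-profiler" is a hypothetical, plausible-sounding name
# of the kind an LLM might invent; "numpy" is a real package for contrast.
for suggested in ["numpy", "pandas-profiler"]:
    status = "ok" if package_exists_on_pypi(suggested) else "possibly hallucinated"
    print(f"{suggested}: {status}")
```

A check like this only catches names that do not exist at all; it does not detect hallucinated APIs within real packages, which is why benchmarks and documentation-grounded generation remain active research directions.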