LLM-Based Programming Assistants
Programming assistants based on large language models (LLMs) aim to improve coding efficiency and reduce errors by generating code, explaining it, and assisting with debugging. Current research focuses on enhancing LLM performance through techniques such as Minimum Bayes Risk decoding and iterative self-training; on mitigating issues such as benchmark code leakage and adversarial attacks via prompt engineering and reinforcement learning; and on exploring applications in areas as diverse as cybersecurity and education. These advances hold significant potential for improving software development practice, hardening code security, and transforming how programming is taught and learned.
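As a rough illustration of one of the techniques mentioned above, the following sketch shows Minimum Bayes Risk (MBR) decoding applied to code generation: instead of trusting a single sample, several candidates are drawn from the model and the one with the highest expected similarity to the others is selected. The candidates here are hard-coded stand-ins for LLM samples, and the token-overlap F1 metric is an illustrative assumption; real systems may use metrics such as unit-test agreement or execution-based equivalence.

```python
def token_f1(a: str, b: str) -> float:
    """Illustrative similarity metric: F1 over whitespace tokens."""
    ta, tb = a.split(), b.split()
    if not ta or not tb:
        return 0.0
    common = len(set(ta) & set(tb))
    if common == 0:
        return 0.0
    precision, recall = common / len(ta), common / len(tb)
    return 2 * precision * recall / (precision + recall)

def mbr_select(candidates: list[str]) -> str:
    """Pick the candidate with the highest average similarity to the
    other samples -- the consensus choice that minimizes expected risk."""
    best, best_score = candidates[0], -1.0
    for i, c in enumerate(candidates):
        score = sum(token_f1(c, o) for j, o in enumerate(candidates)
                    if j != i) / max(len(candidates) - 1, 1)
        if score > best_score:
            best, best_score = c, score
    return best

# Hypothetical samples from an LLM at nonzero temperature: two agree,
# one contains a sign error, so MBR selects the majority behavior.
samples = [
    "def add(a, b): return a + b",
    "def add(a, b): return a + b",
    "def add(a, b): return a - b",
]
print(mbr_select(samples))  # → def add(a, b): return a + b
```

The intuition is that independent samples are more likely to agree on correct code than to agree on the same mistake, so the consensus candidate is a safer bet than any single greedy decode.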