Test Generation
Test generation aims to automate the creation of effective test cases, a task central to software and system reliability. Current research relies heavily on large language models (LLMs) and generative adversarial networks (GANs), often combined with reinforcement learning and coverage-guided techniques to improve test quality and efficiency, particularly for complex targets such as cyber-physical systems and AI agents. Automated test generation matters because it can sharply reduce the time and cost of software development and verification, yielding higher-quality, more reliable systems across domains.
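To make the coverage-guided idea concrete, below is a minimal Python sketch of the selection loop that many such systems share: sample candidate tests from a generator (an LLM in practice), execute them, and keep only those that reach code no earlier test covered. The helpers `generate_candidate_test` and `run_and_measure` are hypothetical placeholders standing in for a model call and an instrumented test run; they are not APIs from any paper listed here.

```python
import random
from typing import Callable, List, Set


def coverage_guided_suite(
    generate_candidate_test: Callable[[], str],
    run_and_measure: Callable[[str], Set[int]],
    budget: int = 100,
) -> List[str]:
    """Keep only candidate tests that execute lines no earlier test reached."""
    suite: List[str] = []
    covered: Set[int] = set()
    for _ in range(budget):
        test = generate_candidate_test()   # e.g., sampled from an LLM
        new_lines = run_and_measure(test)  # line ids executed by this test
        if new_lines - covered:            # accept only novel coverage
            suite.append(test)
            covered |= new_lines
    return suite


if __name__ == "__main__":
    # Toy usage: pretend each "test" covers a random subset of 20 lines.
    gen = lambda: f"test_{random.randint(0, 9999)}"
    run = lambda t: set(random.sample(range(20), k=5))
    print(len(coverage_guided_suite(gen, run, budget=50)), "tests kept")
```

In reinforcement-learning variants, the same novelty signal (new coverage, or passing/failing executions) is fed back to the generator as a reward rather than used only as a post-hoc filter.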
Papers
Reinforcement Learning from Automatic Feedback for High-Quality Unit Test Generation
Benjamin Steenhoek, Michele Tufano, Neel Sundaresan, Alexey Svyatkovskiy
Design choices made by LLM-based test generators prevent them from finding bugs
Noble Saji Mathews, Meiyappan Nagappan
GenX: Mastering Code and Test Generation with Execution Feedback
Nan Wang, Yafei Liu, Chen Chen, Haonan Lu
Evaluating the Ability of Large Language Models to Generate Verifiable Specifications in VeriFast
Wen Fan, Marilyn Rego, Xin Hu, Sanya Dod, Zhaorui Ni, Danning Xie, Jenna DiVincenzo, Lin Tan
Parameter-Efficient Fine-Tuning of Large Language Models for Unit Test Generation: An Empirical Study
André Storhaug, Jingyue Li