Test Generation
Test generation aims to automate the creation of effective test cases, a task crucial to software and system reliability. Current research relies heavily on large language models (LLMs) and generative adversarial networks (GANs), often combined with reinforcement learning and coverage-guided techniques to improve test quality and efficiency, particularly for complex targets such as cyber-physical systems and AI agents. The field matters because automated test generation can drastically reduce the time and cost of software development and verification, yielding higher-quality, more reliable systems across domains.
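To make the coverage-guided idea concrete, here is a minimal sketch (not taken from any of the listed papers; `function_under_test`, `run_with_coverage`, and `generate_tests` are illustrative names). It keeps a randomly generated input only when executing it exercises previously unseen lines, which is the core feedback loop behind coverage-guided test generation; production tools use proper instrumentation (e.g., coverage.py or a fuzzer's edge counters) rather than `sys.settrace`.

```python
import random
import sys

def function_under_test(x):
    """Hypothetical target whose branches yield distinct line coverage."""
    if x < 0:
        return -x
    if x % 7 == 0:
        return x // 7
    return x + 1

def run_with_coverage(func, arg):
    """Execute func(arg) and return the set of line numbers it hit."""
    hits = set()

    def tracer(frame, event, _):
        # Record line events only for the function under test.
        if event == "line" and frame.f_code is func.__code__:
            hits.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(arg)
    finally:
        sys.settrace(None)
    return hits

def generate_tests(func, budget=500):
    """Keep only inputs that increase coverage; they form the test corpus."""
    covered, corpus = set(), []
    for _ in range(budget):
        x = random.randint(-100, 100)
        new_lines = run_with_coverage(func, x) - covered
        if new_lines:  # coverage feedback: novel behavior, keep the input
            covered |= new_lines
            corpus.append(x)
    return corpus

if __name__ == "__main__":
    print("coverage-increasing inputs:", generate_tests(function_under_test))
```

LLM-based approaches, as surveyed by the papers below, replace the random input generator with model-proposed test cases, but many still use the same coverage signal to decide which generated tests to keep.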
Papers
Evaluating the Ability of Large Language Models to Generate Verifiable Specifications in VeriFast
Marilyn Rego, Wen Fan, Xin Hu, Sanya Dod, Zhaorui Ni, Danning Xie, Jenna DiVincenzo, Lin Tan
Parameter-Efficient Fine-Tuning of Large Language Models for Unit Test Generation: An Empirical Study
André Storhaug, Jingyue Li