Fuzz Testing

Fuzz testing is a dynamic software testing technique that feeds a system malformed or unexpected inputs to uncover vulnerabilities and unexpected behaviors. Current research focuses on making fuzzing more effective through reinforcement learning, large language models (LLMs) for intelligent input generation and mutation, and novel coverage metrics to guide the testing process, with applications spanning software, hardware, and even LLMs themselves. By identifying weaknesses before deployment, fuzzing is crucial for improving the security and reliability of complex systems, particularly in safety-critical domains such as autonomous driving and medical applications.
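The core loop of a mutation-based fuzzer can be sketched in a few lines. The snippet below is a minimal illustration, not any published tool: `parse_header` is a made-up toy target, and `mutate` simply flips random bytes, standing in for the smarter LLM- or coverage-guided mutation strategies the research above studies.

```python
import random

def mutate(data: bytes, n_mutations: int = 4) -> bytes:
    """Produce a malformed variant by overwriting random bytes."""
    buf = bytearray(data)
    for _ in range(n_mutations):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def parse_header(data: bytes) -> str:
    """Toy target: expects a 4-byte magic followed by an ASCII name."""
    if data[:4] != b"FUZZ":
        raise ValueError("bad magic")
    return data[4:].decode("ascii")

def fuzz(seed: bytes, trials: int = 1000) -> list[bytes]:
    """Feed mutated inputs to the target and collect inputs that crash it."""
    crashes = []
    for _ in range(trials):
        candidate = mutate(seed)
        try:
            parse_header(candidate)
        except (ValueError, UnicodeDecodeError):
            crashes.append(candidate)
    return crashes

crashes = fuzz(b"FUZZname")
```

Real fuzzers such as AFL++ or libFuzzer extend this loop with instrumentation: inputs that reach new code coverage are kept as fresh seeds, which is the feedback signal that guided fuzzing research builds on.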

Papers