Level Test
Level testing encompasses a broad range of techniques for evaluating the performance and reliability of systems, from software and AI models to physical structures and robotic systems. Current research focuses on automated testing frameworks, on leveraging AI models (such as LLMs and CNNs) for test generation, analysis, and evaluation, and on techniques such as Structure from Motion for high-precision 3D modeling in physical testing. These advances aim to improve the efficiency, accuracy, and robustness of testing across diverse scientific and engineering domains, ultimately yielding more reliable and trustworthy systems.
Papers
Toward Sufficient Statistical Power in Algorithmic Bias Assessment: A Test for ABROCA
Conrad Borchers
DRIVINGVQA: Analyzing Visual Chain-of-Thought Reasoning of Vision Language Models in Real-World Scenarios with Driving Theory Tests
Charles Corbière, Simon Roburin, Syrielle Montariol, Antoine Bosselut, Alexandre Alahi
Tests for model misspecification in simulation-based inference: from local distortions to global model checks
Noemi Anau Montel, James Alvey, Christoph Weniger
Helping LLMs Improve Code Generation Using Feedback from Testing and Static Analysis
Greta Dolcetti, Vincenzo Arceri, Eleonora Iotti, Sergio Maffeis, Agostino Cortesi, Enea Zaffanella