Verification Task
Verification tasks aim to ensure the accuracy, reliability, and trustworthiness of systems and models. Current research focuses on developing robust verification methods across diverse domains, including image and document analysis (using recurrent and transformer-based models), logical reasoning (employing LLMs and symbolic methods), and autonomous systems (leveraging formal methods and data-driven approaches). These advances are crucial for the safety and dependability of AI systems, robotic applications, and other technologies where reliable performance is paramount.
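A recurring idea behind the differentiable-logics work listed under Papers below is to translate a logical property into a differentiable loss term, so that gradient-based training can be pushed toward satisfying the property. The sketch below illustrates that idea in PyTorch; the relu-based translation, the toy model, and the output-bound property are illustrative assumptions rather than the exact construction used in the paper.

    import torch

    def le_penalty(lhs, rhs):
        # Atomic constraint "lhs <= rhs" as a differentiable penalty:
        # zero when satisfied, growing linearly with the violation.
        return torch.relu(lhs - rhs)

    def conj(*penalties):
        # Conjunction of constraints: sum the individual penalties.
        return sum(penalties)

    # Hypothetical toy model and property, for illustration only:
    # every output should lie in the interval [-1, 1].
    model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.Sigmoid(), torch.nn.Linear(8, 1))
    x = torch.randn(32, 4)
    y = model(x)

    property_loss = conj(le_penalty(y, torch.ones_like(y)),
                         le_penalty(-torch.ones_like(y), y)).mean()
    property_loss.backward()  # the logical property now contributes gradients to training

Because the penalty is zero exactly when the constraint holds, adding it to the ordinary training loss steers the network toward inputs-to-outputs behaviour that a verifier is more likely to certify.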
Papers
Robot Swarms as Hybrid Systems: Modelling and Verification
Stefan Schupp, Francesco Leofante, Leander Behr, Erika Ábrahám, Armando Tacchella
Verification of Sigmoidal Artificial Neural Networks using iSAT
Dominik Grundt, Sorin Liviu Jurj, Willem Hagemann, Paul Kröger, Martin Fränzle
Differentiable Logics for Neural Network Training and Verification
Natalia Slusarz, Ekaterina Komendantskaya, Matthew L. Daggitt, Robert Stewart