Stability Verification

Stability verification focuses on ensuring the robustness and reliability of systems in the face of uncertainties and perturbations. Current research emphasizes methods that evaluate and guarantee stability across diverse system types, including learned models (such as neural networks), control systems, and physical structures, using techniques such as distributional perturbation analysis, mixed-integer programming, and supermartingale-based approaches. These advances are crucial for deploying reliable AI systems, designing robust control strategies, and improving the safety and predictability of complex engineered systems.
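
To make the supermartingale idea mentioned above concrete, the sketch below numerically spot-checks an expected-decrease condition E[V(x_{t+1}) | x_t] <= V(x_t) - eps on a toy stochastic linear system. The dynamics, the candidate certificate V, and all constants are assumptions chosen for the illustration, and a sampled check is only a sanity test, not the formal certificate these methods construct.

```python
import numpy as np

# Illustrative sketch (not taken from any of the summarized papers):
# empirically checking a supermartingale-style decrease condition
#   E[V(x_{t+1}) | x_t] <= V(x_t) - eps
# for a toy stochastic linear system x_{t+1} = A x_t + w_t.
# A, V, sigma, and eps are all assumptions made for this demo.

rng = np.random.default_rng(0)

A = np.array([[0.6, 0.2],
              [0.0, 0.5]])      # stable dynamics (spectral radius < 1)
sigma = 0.01                    # scale of the additive Gaussian noise w_t
eps = 1e-3                      # required expected-decrease margin

def V(x):
    """Candidate certificate: a simple quadratic V(x) = ||x||^2."""
    return float(x @ x)

def expected_next_V(x, n_samples=2000):
    """Monte Carlo estimate of E[V(A x + w)] with w ~ N(0, sigma^2 I)."""
    w = sigma * rng.standard_normal((n_samples, 2))
    x_next = (A @ x) + w
    return float(np.mean(np.sum(x_next**2, axis=1)))

# Sample states outside a small ball around the origin (the target region)
# and test the expected-decrease condition at each of them.
violations = 0
for _ in range(500):
    x = rng.uniform(-1.0, 1.0, size=2)
    if V(x) < 0.05:             # skip states already inside the target region
        continue
    if expected_next_V(x) > V(x) - eps:
        violations += 1

print(f"decrease condition violated at {violations} sampled states")
```

In the formal setting, the same condition is discharged for all states at once (e.g., by SOS programming or, for neural certificates, mixed-integer programming), which is what upgrades the sampled check above into an actual stability guarantee.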

Papers