Paper ID: 2111.03782
Confidence Composition for Monitors of Verification Assumptions
Ivan Ruchkin, Matthew Cleaveland, Radoslav Ivanov, Pengyuan Lu, Taylor Carpenter, Oleg Sokolsky, Insup Lee
Closed-loop verification of cyber-physical systems with neural network controllers offers strong safety guarantees under certain assumptions. It is, however, difficult to determine whether these guarantees apply at run time because verification assumptions may be violated. To predict safety violations in a verified system, we propose a three-step confidence composition (CoCo) framework for monitoring verification assumptions. First, we represent the sufficient condition for verified safety with a propositional logic formula over the assumptions. Second, we build calibrated confidence monitors that each evaluate the probability that one assumption holds. Third, we obtain the confidence in the verification guarantees by composing the assumption monitors using a composition function suited to the logical formula. Our CoCo framework provides theoretical bounds on the calibration and conservatism of compositional monitors. Two case studies show that compositional monitors are calibrated better than their constituents and successfully predict safety violations.
Submitted: Nov 3, 2021
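To make the composition step concrete, below is a minimal illustrative sketch, not the paper's exact algorithm or API, of how calibrated per-assumption confidences might be combined when the sufficient condition for verified safety is a conjunction of assumptions. The two composition functions shown, a product under an assumed independence of assumptions and the conservative Fréchet lower bound, are assumed examples of functions one might pair with a conjunctive formula.

```python
# Illustrative sketch of confidence composition for a conjunctive sufficient
# condition. The composition functions here are assumed examples, not
# necessarily those used in the paper.
from typing import Callable, Sequence


def product_composition(confidences: Sequence[float]) -> float:
    """Compose under an independence assumption: P(A1 and ... and An) = prod_i P(Ai)."""
    result = 1.0
    for c in confidences:
        result *= c
    return result


def frechet_lower_bound(confidences: Sequence[float]) -> float:
    """Conservative composition with no independence assumption:
    P(A1 and ... and An) >= max(0, sum_i P(Ai) - (n - 1))."""
    n = len(confidences)
    return max(0.0, sum(confidences) - (n - 1))


def compose_confidence(
    monitor_outputs: Sequence[float],
    compose: Callable[[Sequence[float]], float],
) -> float:
    """Combine per-assumption monitor confidences into a single confidence
    that the sufficient condition for verified safety holds."""
    return compose(monitor_outputs)


if __name__ == "__main__":
    # Hypothetical monitor outputs: estimated probability that each assumption holds.
    confidences = [0.95, 0.90, 0.99]
    print("independence-based composition:", compose_confidence(confidences, product_composition))
    print("conservative (Frechet) composition:", compose_confidence(confidences, frechet_lower_bound))
```

In this sketch, the Fréchet lower bound never overestimates the joint confidence regardless of how the assumptions are correlated, which is one way a composition can remain conservative in the sense the abstract mentions.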