Paper ID: 2207.13891
Quantifying Safety of Learning-based Self-Driving Control Using Almost-Barrier Functions
Zhizhen Qin, Tsui-Wei Weng, Sicun Gao
Path-tracking control of self-driving vehicles can benefit from deep learning for tackling longstanding challenges such as nonlinearity and uncertainty. However, deep neural controllers lack safety guarantees, restricting their practical use. We propose a new approach for learning almost-barrier functions, which approximately characterize the forward-invariant set of the system under neural controllers, to quantitatively analyze the safety of deep neural controllers for path tracking. We design sampling-based learning procedures for constructing candidate neural barrier functions, and certification procedures that utilize robustness analysis of neural networks to identify regions where the barrier conditions are fully satisfied. We use an adversarial training loop between learning and certification to optimize the almost-barrier functions. The learned barrier can also be used to construct online safety monitors through reachability analysis. We demonstrate the effectiveness of our methods in quantifying the safety of neural controllers in various simulation environments, ranging from simple kinematic models to the TORCS simulator with high-fidelity vehicle dynamics.
Submitted: Jul 28, 2022
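
For context, the barrier conditions referenced in the abstract can be sketched in their standard continuous-time form for a closed-loop system \dot{x} = f(x, \pi(x)) with controller \pi. This is a generic textbook formulation, not necessarily the paper's exact definitions; the notation \mathcal{X}_0 (initial states) and \mathcal{X}_u (unsafe states) is assumed here for illustration.

% Standard barrier-certificate conditions (generic sketch, not the
% paper's precise formulation or its almost-barrier relaxation).
\begin{align*}
  B(x) &\ge 0 \quad \forall x \in \mathcal{X}_0
    && \text{(initial states lie inside the barrier)} \\
  B(x) &< 0 \quad \forall x \in \mathcal{X}_u
    && \text{(unsafe states lie outside)} \\
  \nabla B(x) \cdot f(x, \pi(x)) &\ge 0 \quad \forall x : B(x) = 0
    && \text{(the flow does not exit through the boundary)}
\end{align*}

Loosely speaking, an almost-barrier function satisfies these conditions everywhere except on small regions that the certification procedure explicitly identifies, so the extent of the uncertified region serves as a quantitative measure of the neural controller's safety.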