Bound Propagation

Bound propagation is a technique for computing provable bounds on the outputs a neural network can produce given a set of constraints on its inputs, which is crucial for verifying network properties such as robustness and safety. Current research focuses on improving the efficiency and tightness of bound propagation methods, particularly for complex architectures like transformers and networks with non-linear activation functions, using techniques such as branch-and-bound, linear programming, and Bernstein polynomial approximations. These advances are vital for enhancing the reliability and trustworthiness of deep learning models in safety-critical applications, enabling formal verification and ultimately increasing confidence in their deployment.
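
To make the core idea concrete, the sketch below propagates simple interval bounds through a tiny feed-forward ReLU network. It is a minimal illustration, assuming NumPy and an arbitrary randomly initialized 2-3-2 network with an epsilon-ball around a nominal input; practical verifiers layer tighter linear relaxations, branch-and-bound, or polynomial approximations on top of this basic scheme.

```python
# Minimal sketch of interval bound propagation (IBP) through a small
# ReLU network. The weights, shapes, and epsilon below are illustrative
# assumptions, not taken from any particular paper or library.
import numpy as np

def interval_affine(lower, upper, W, b):
    """Propagate elementwise bounds [lower, upper] through y = W @ x + b."""
    W_pos = np.clip(W, 0, None)   # positive entries of W
    W_neg = np.clip(W, None, 0)   # negative entries of W
    new_lower = W_pos @ lower + W_neg @ upper + b
    new_upper = W_pos @ upper + W_neg @ lower + b
    return new_lower, new_upper

def interval_relu(lower, upper):
    """ReLU is monotone, so bounds pass through elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Hypothetical 2-3-2 network and an eps-ball input region.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
W2, b2 = rng.normal(size=(2, 3)), rng.normal(size=2)

x0, eps = np.array([0.5, -0.2]), 0.1
lower, upper = x0 - eps, x0 + eps

lower, upper = interval_affine(lower, upper, W1, b1)
lower, upper = interval_relu(lower, upper)
lower, upper = interval_affine(lower, upper, W2, b2)

print("output lower bounds:", lower)
print("output upper bounds:", upper)
```

The resulting output intervals are guaranteed to contain every output the network can produce on the input region, though plain interval arithmetic tends to be loose; tightening these bounds efficiently is exactly what the methods surveyed here aim to do.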

Papers