Paper ID: 2410.19844

A practical, fast method for solving sum-of-squares problems for very large polynomials

Daniel Keren, Margarita Osadchy, Roi Poranne

Sum of squares (SOS) optimization is a powerful technique for solving problems where the positivity of a polynomial must be enforced. The common approach to solving an SOS problem is by relaxation to a semidefinite program (SDP). The main advantage of this transformation is that SDP is a convex problem for which efficient solvers are readily available. However, while considerable progress has been made in recent years, the standard approaches for solving SDPs are still known to scale poorly. Our goal is to devise an approach that can handle larger, more complex problems than is currently possible. The challenge indeed lies in how SDPs are commonly solved. State-of-the-art approaches rely on interior point methods, which require the factorization of large matrices. We instead propose an approach inspired by polynomial neural networks, which exhibit excellent performance when optimized using techniques from the deep learning toolbox. In a somewhat counter-intuitive manner, we replace the convex SDP formulation with a non-convex, unconstrained, and \emph{over-parameterized} formulation, and solve it using a first-order optimization method. It turns out that this approach can handle very large problems, with polynomials having over four million coefficients, well beyond the range of current SDP-based approaches. Furthermore, we highlight theoretical and practical results supporting the experimental success of our approach in avoiding spurious local minima, which makes it amenable to simple and fast solutions based on gradient descent. In all our experiments, the approach converged to a correct global minimum on general (non-sparse) polynomials, with running time only slightly higher than linear in the number of polynomial coefficients, compared to higher than quadratic in the number of coefficients for SDP-based methods.
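
The abstract does not spell out the parameterization, but a standard way to realize a non-convex, unconstrained, over-parameterized SOS formulation is a Burer-Monteiro-style factorization of the Gram matrix, Q = U U^T, fitted with a first-order optimizer. The sketch below illustrates this idea on a tiny univariate example in PyTorch; the monomial basis, target polynomial, factor width r, and optimizer settings are illustrative assumptions, not the authors' implementation.

import torch

def gram_to_coeffs(Q):
    # Coefficients of m(x)^T Q m(x) for the monomial basis m(x) = [1, x, ..., x^d]:
    # the coefficient of x^k is the sum of the k-th anti-diagonal of Q.
    d = Q.shape[0] - 1
    return torch.stack([torch.diagonal(torch.flipud(Q), offset=k - d).sum()
                        for k in range(2 * d + 1)])

# Hypothetical test instance: p(x) = (x^2 - 1)^2 + (x + 2)^2 = x^4 - x^2 + 4x + 5,
# stored as coefficients in increasing degree.
target = torch.tensor([5.0, 4.0, -1.0, 0.0, 1.0])

d = (len(target) - 1) // 2       # half-degree; monomial basis has d + 1 entries
r = d + 2                        # over-parameterized factor width (assumption)
U = 0.1 * torch.randn(d + 1, r)  # unconstrained factor: Q = U U^T is PSD by construction
U.requires_grad_(True)

opt = torch.optim.Adam([U], lr=0.05)
for step in range(2000):
    opt.zero_grad()
    # Squared error between the coefficients realized by U U^T and the target's.
    loss = ((gram_to_coeffs(U @ U.T) - target) ** 2).sum()
    loss.backward()
    opt.step()

print("coefficient error:", loss.item())  # near zero if p admits an SOS decomposition

Because U is unconstrained, positive semidefiniteness of Q = U U^T holds by construction; this is what removes the explicit SDP constraint and allows plain gradient-based optimization in place of an interior point solver.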

Submitted: Oct 21, 2024