Differentiable Convex Programming

Differentiable convex programming embeds convex optimization problems directly into neural networks as layers, enabling end-to-end training of systems that require both prediction and optimization. Current research focuses on efficient algorithms for differentiating through these optimization layers, such as Lagrangian Proximal Gradient Descent and ADMM-based methods, within architectures that incorporate components like Control Barrier Functions and Quadratic Programs. This approach is proving valuable in applications such as robotics, control systems, and resource allocation, offering improved performance and safety guarantees over traditional two-stage pipelines that decouple prediction from optimization. The ability to integrate optimization seamlessly into learning systems is significantly advancing the capabilities of AI in safety-critical domains.
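The core mechanism behind such optimization layers can be illustrated on a small case: for an equality-constrained quadratic program, the solution is defined by a linear KKT system, and the implicit function theorem gives the gradient of the solution with respect to the problem data. The sketch below (a minimal illustration with NumPy; the function names are our own, not from any particular library) differentiates the QP solution with respect to the linear cost term and checks the result against finite differences:

```python
import numpy as np

def solve_eq_qp(Q, q, A, b):
    """Solve min 0.5 x^T Q x + q^T x  s.t.  A x = b via its KKT system."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-q, b]))
    return sol[:n], K  # primal solution x*, and the KKT matrix

def grad_x_wrt_q(K, n):
    """Implicit function theorem: K [dx; dnu] = [-dq; 0], so dx*/dq = -(K^{-1})[:n, :n]."""
    return -np.linalg.inv(K)[:n, :n]

rng = np.random.default_rng(0)
n, m = 4, 2
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)          # positive definite objective
q = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x_star, K = solve_eq_qp(Q, q, A, b)
J = grad_x_wrt_q(K, n)               # analytic Jacobian dx*/dq

# Finite-difference check of the analytic gradient.
eps = 1e-6
J_fd = np.zeros((n, n))
for i in range(n):
    dq = np.zeros(n)
    dq[i] = eps
    x_plus, _ = solve_eq_qp(Q, q + dq, A, b)
    J_fd[:, i] = (x_plus - x_star) / eps

assert np.allclose(J, J_fd, atol=1e-4)
```

In a learning pipeline, this Jacobian is what backpropagation uses to push gradients through the optimization layer; libraries such as cvxpylayers and OptNet-style QP layers generalize the same idea to inequality constraints via differentiation of the full KKT conditions.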

Papers