Non-Convex Optimization Problems
Non-convex optimization tackles the challenge of finding optimal solutions in complex, multi-modal landscapes where the objective function lacks the guarantees of convexity, such as a unique global minimum that local search is assured to reach. Current research focuses on efficient algorithms for escaping local minima and approaching global optima, including variants of gradient descent (e.g., stochastic gradient descent, accelerated methods), proximal methods, and techniques leveraging generative neural networks or convex relaxations. These advances matter for machine learning, control systems, and other fields where non-convexity is inherent, improving the performance and reliability of applications ranging from neural network training to resource allocation. Optimality certificates and a sharper theoretical understanding of convergence properties are also significant areas of ongoing investigation.
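To make the local-minima problem concrete, here is a minimal Python sketch, not drawn from any of the papers listed below: plain gradient descent on a tilted double-well objective, combined with random restarts so that at least one run lands in the global basin. The objective `f`, step size `lr`, and restart count are illustrative assumptions, not a prescribed method.

```python
import numpy as np

def f(x):
    # Tilted double-well: two local minima near x = +/-1;
    # the tilt makes the basin near x = -1 the global one.
    return (x**2 - 1.0)**2 + 0.3 * x

def grad_f(x):
    # Analytic gradient of the objective above.
    return 4.0 * x * (x**2 - 1.0) + 0.3

def gradient_descent(x0, lr=0.05, steps=200):
    # Plain gradient descent: converges to whichever local
    # minimum's basin contains the starting point x0.
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

rng = np.random.default_rng(0)
# Random restarts: run local descent from several initializations
# and keep the best result, a simple way to escape dependence on
# a single starting point in a multi-modal landscape.
candidates = [gradient_descent(x0) for x0 in rng.uniform(-2.0, 2.0, size=10)]
best = min(candidates, key=f)
print(f"best x = {best:.4f}, f(x) = {f(best):.4f}")
```

Runs started in the right-hand basin stall at the worse local minimum near x = +1; the restart loop recovers the global minimum near x = -1. More sophisticated approaches, such as the learned proximal operators and deflation techniques in the papers below, aim to discover multiple optima more systematically than blind restarting.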
Papers
Learning Proximal Operators to Discover Multiple Optima
Lingxiao Li, Noam Aigerman, Vladimir G. Kim, Jiajin Li, Kristjan Greenewald, Mikhail Yurochkin, Justin Solomon
Simplifying deflation for non-convex optimization with applications in Bayesian inference and topology optimization
Mohamed Tarek, Yijiang Huang