Nonconvexity Setting
Nonconvexity poses significant challenges across diverse fields, from machine learning to computer vision, because many real-world problems lack the convenient properties of convex optimization. Current research focuses on understanding how nonconvexity affects algorithm performance, particularly the convergence behavior of first-order methods such as stochastic gradient descent, and on structural conditions and techniques, such as smoothing and cohypomonotonicity, that mitigate its effects. This work is crucial for developing robust and efficient algorithms for applications where nonconvexity is inherent, including generative modeling, risk-constrained learning, and process optimization. The ultimate goal is a deeper theoretical understanding of nonconvex optimization landscapes and the design of algorithms that reliably find global or near-global optima.
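As a concrete illustration of the smoothing idea mentioned above, the sketch below runs gradient descent on the Gaussian-smoothed version of a simple nonconvex test function, using a standard two-point zeroth-order gradient estimator. The objective, step size, and sample counts are illustrative assumptions, not drawn from any of the papers listed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Simple nonconvex test objective (Rastrigin); illustrative only.
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)) + 10.0 * x.size

def smoothed_grad(x, sigma=0.1, num_samples=32):
    # Monte Carlo estimate of the gradient of the Gaussian-smoothed
    # objective f_sigma(x) = E[f(x + sigma * u)], u ~ N(0, I),
    # via the two-point estimator E[(f(x + sigma*u) - f(x)) / sigma * u].
    d = x.size
    g = np.zeros(d)
    fx = f(x)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        g += (f(x + sigma * u) - fx) / sigma * u
    return g / num_samples

# Descend on the smoothed landscape; smoothing washes out small
# local oscillations, though it does not guarantee a global optimum.
x = rng.uniform(-2.0, 2.0, size=5)
lr = 0.01
for t in range(2000):
    x -= lr * smoothed_grad(x)

print("final objective:", f(x))
```

Larger `sigma` smooths away more local minima at the cost of biasing the optimum of the smoothed surrogate away from that of the original objective, which is why smoothing schedules typically anneal `sigma` toward zero.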
Papers
Nonnegative Low-rank Matrix Recovery Can Have Spurious Local Minima
Richard Y. Zhang (University of Illinois at Urbana-Champaign)

Wasserstein Convergence of Score-based Generative Models under Semiconvexity and Discontinuous Gradients
Stefano Bruno, Sotirios Sabanis (University of Edinburgh; Ulsan National Institute of Science and Technology; National Technical University of Athens; Archimedes/Athena Research...)