Nonsmooth Objective

Nonsmooth objective optimization focuses on solving optimization problems where the objective function is not differentiable everywhere, a common challenge in machine learning and other fields (for example, objectives built from hinge losses, L1 penalties, or ReLU activations). Current research emphasizes developing efficient algorithms, such as variants of stochastic (sub)gradient descent, adaptive gradient methods like Adagrad, and primal-dual methods, to handle these nonsmooth functions in various settings, including decentralized and Riemannian optimization. These advances are important for deep learning, optimal control, and signal processing, where nonsmoothness arises naturally from model architectures or data characteristics, and continued progress on robust, efficient nonsmooth algorithms is driving progress across these application domains.
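
To make the basic idea concrete, below is a minimal sketch of a subgradient method applied to an L1-regularized least-squares objective, whose L1 term is nonsmooth at zero. It is only an illustration under assumed data, regularization strength, and step-size schedule, not an implementation of any specific paper listed below.

```python
import numpy as np

# Minimal subgradient descent sketch for the nonsmooth objective
#   f(w) = 0.5 * ||X w - y||^2 + lam * ||w||_1
# The L1 term is not differentiable at w_i = 0, so we use a subgradient
# (sign(w), choosing 0 at zero) and a diminishing 1/sqrt(t) step size.

def subgradient_descent(X, y, lam=0.1, n_iters=500):
    w = np.zeros(X.shape[1])
    best_w, best_f = w.copy(), np.inf
    for t in range(1, n_iters + 1):
        residual = X @ w - y
        f = 0.5 * residual @ residual + lam * np.abs(w).sum()
        if f < best_f:                       # track the best iterate, since
            best_f, best_w = f, w.copy()     # subgradient steps need not decrease f
        g = X.T @ residual + lam * np.sign(w)  # one valid subgradient of f at w
        w = w - (1.0 / np.sqrt(t)) * g         # classic diminishing step size
    return best_w, best_f

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 20))
    w_true = np.zeros(20)
    w_true[:3] = [2.0, -1.5, 0.5]            # sparse ground-truth weights (assumed)
    y = X @ w_true + 0.01 * rng.standard_normal(100)
    w_hat, f_hat = subgradient_descent(X, y)
    print("best objective value:", f_hat)
```

The stochastic, adaptive, and primal-dual methods mentioned above refine this basic template, for instance by replacing the full subgradient with a sampled one or by adapting the step size per coordinate.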

Papers