Non-Smooth Optimization

Non-smooth optimization addresses the challenge of minimizing functions that are not differentiable everywhere (and may even be discontinuous), a common feature in many machine learning and signal processing problems. Current research focuses on developing efficient algorithms, such as stochastic gradient methods, proximal methods, and adaptive learning rate techniques (like Adam and RMSProp), tailored to specific types of non-smoothness, including the non-smoothness arising in weakly convex and composite objectives. These advances improve the speed and accuracy of solving complex optimization problems, with impact ranging from training neural networks and solving inverse problems to federated learning and robust statistical inference. Establishing tight convergence guarantees and matching lower complexity bounds for these algorithms remains a key area of ongoing investigation.
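To make the proximal-method idea concrete, here is a minimal sketch, not taken from any particular paper listed below, of proximal gradient descent (ISTA) for a canonical composite problem: a smooth least-squares term plus a non-smooth L1 penalty, whose proximal operator is soft-thresholding. All function names and parameters are illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, n_iters=500):
    """Minimize the composite objective 0.5*||Ax - b||^2 + lam*||x||_1
    by alternating a gradient step on the smooth term with the prox of
    the non-smooth L1 term (illustrative sketch)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the smooth gradient
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                      # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # prox of the non-smooth part
    return x

# Small synthetic example: recover a sparse vector from noisy measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = proximal_gradient(A, b, lam=0.1)
```

The same gradient-step-plus-prox template extends to the stochastic and adaptive variants mentioned above; what changes is how the gradient of the smooth part is estimated and how the step size is chosen.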

Papers