Non-Convex Objective
Non-convex objective functions pose significant challenges in optimization across machine learning applications, hindering the design of efficient, provably convergent algorithms. Current research focuses on developing and analyzing novel first-order and second-order methods, including variants of gradient descent, Adam, AdaGrad, and proximal gradient methods, to handle non-convex landscapes in settings such as federated learning and deep neural networks. These advances are crucial for improving both the performance and the theoretical understanding of many machine learning models, with impact on areas such as data privacy (through certified unlearning), distributed optimization, and robust decision-making.
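As a concrete illustration, the sketch below compares plain gradient descent with Adam, two of the first-order methods mentioned above, on a toy non-convex objective (a two-dimensional double well with two minima and a saddle point). The objective, step sizes, and iteration counts are assumptions chosen only for illustration and are not drawn from any specific paper; the point is that different initializations or optimizers can land at different stationary points, which is exactly the difficulty non-convex analysis must address.

```python
import numpy as np

def objective(x):
    """Toy non-convex objective: a double well with minima at (+/-1, 0) and a saddle at (0, 0)."""
    return (x[0] ** 2 - 1.0) ** 2 + x[1] ** 2

def gradient(x):
    """Analytic gradient of the double-well objective above."""
    return np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])

def gradient_descent(x0, lr=1e-2, steps=2000):
    """Plain gradient descent; in the non-convex case it only reaches a stationary point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * gradient(x)
    return x

def adam(x0, lr=1e-2, beta1=0.9, beta2=0.999, eps=1e-8, steps=2000):
    """Adam: an adaptive first-order method commonly used on non-convex deep-learning losses."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)  # first-moment (mean of gradients) estimate
    v = np.zeros_like(x)  # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        g = gradient(x)
        m = beta1 * m + (1.0 - beta1) * g
        v = beta2 * v + (1.0 - beta2) * g ** 2
        m_hat = m / (1.0 - beta1 ** t)  # bias-corrected moments
        v_hat = v / (1.0 - beta2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

if __name__ == "__main__":
    # Two initializations on opposite sides of the saddle reach different local minima.
    for x0 in ([-1.5, 1.0], [0.8, -0.5]):
        print("start", x0, "-> GD:", gradient_descent(x0), " Adam:", adam(x0))
```

Both optimizers converge to one of the two local minima depending on the starting point; no first-order method can certify a global minimum here, which is why convergence guarantees for non-convex problems are typically stated in terms of stationary points.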