Descent Property

In optimization and machine learning, the descent property is the guarantee that an iterative algorithm never increases its objective function: each update satisfies f(x_{k+1}) ≤ f(x_k). Current research focuses on extending descent-based methods to non-convex problems and on analyzing their convergence, particularly for stochastic and distributed algorithms such as stochastic gradient descent (where descent typically holds only in expectation) and federated averaging, as well as on alternative approaches such as symbolic descent for efficient model pruning in large transformer networks. This work matters because it improves the efficiency and robustness of optimization algorithms across applications including neural network training and graph clustering, yielding better performance and scalability in machine learning models.
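As a minimal sketch of the descent property in action (not drawn from any specific paper listed below), the following gradient descent with Armijo backtracking line search enforces a strict decrease of the objective at every accepted step; the function names and constants are illustrative.

```python
import numpy as np

def gradient_descent_with_armijo(f, grad, x0, alpha0=1.0, beta=0.5, c=1e-4,
                                 max_iters=100, tol=1e-8):
    """Gradient descent with Armijo backtracking.

    The backtracking loop shrinks the step size until the Armijo
    sufficient-decrease condition holds, so every accepted iterate
    satisfies f(x_new) < f(x) -- the descent property.
    """
    x = x0.astype(float)
    history = [f(x)]
    for _ in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        alpha = alpha0
        # Backtrack until sufficient decrease:
        # f(x - alpha * g) <= f(x) - c * alpha * ||g||^2
        while f(x - alpha * g) > f(x) - c * alpha * np.dot(g, g):
            alpha *= beta
        x = x - alpha * g
        history.append(f(x))
    return x, history

# Example: minimize a simple convex quadratic f(x) = ||x - 1||^2.
f = lambda x: np.sum((x - 1.0) ** 2)
grad = lambda x: 2.0 * (x - 1.0)
x_star, hist = gradient_descent_with_armijo(f, grad, np.array([5.0, -3.0]))
# The recorded objective values are monotonically decreasing.
assert all(b <= a for a, b in zip(hist, hist[1:]))
```

The backtracking loop is the key design choice: by shrinking the step size until the sufficient-decrease condition holds, every accepted step is guaranteed to lower the objective, which is precisely the property that deterministic convergence analyses rely on and that stochastic methods can only recover in expectation.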

Papers