Self-Concordant Functions
Self-concordant functions and their generalizations are central to the design of efficient optimization algorithms, particularly for large-scale machine learning problems. A convex function f is self-concordant when its third derivative is controlled by its second, |f'''(x)| <= 2 f''(x)^(3/2), a condition that yields affine-invariant convergence guarantees for Newton-type methods. Current research focuses on exploiting self-concordance within proximal Newton and generalized Gauss-Newton methods, often combined with regularization to improve convergence speed and generalization performance in models such as overparameterized neural networks. This work matters because it enables faster and more robust training of complex models, with applications ranging from machine learning to general convex optimization, and the development of efficient algorithms for these function classes remains an active area of investigation.
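As a concrete illustration, the classical damped Newton method exploits self-concordance: the Newton decrement both measures suboptimality and sets a step size that keeps iterates inside the function's domain. Below is a minimal sketch in Python; the function and variable names (damped_newton, f_grad, f_hess) are illustrative, not drawn from any specific library or paper discussed here.

```python
import numpy as np

def damped_newton(x, grad, hess, tol=1e-8, max_iter=100):
    """Damped Newton method for minimizing a self-concordant function.

    grad and hess are callables returning the gradient vector and
    Hessian matrix of the objective at x.
    """
    for _ in range(max_iter):
        g = grad(x)
        H = hess(x)
        step = np.linalg.solve(H, g)      # Newton direction H^{-1} g
        lam = np.sqrt(g @ step)           # Newton decrement lambda(x)
        if lam < tol:                     # lambda^2 bounds the suboptimality gap
            break
        x = x - step / (1.0 + lam)        # damped step keeps x in the domain
    return x

# Example: the self-concordant function f(x) = c @ x - sum(log(x)) on x > 0,
# whose minimizer is x_i = 1 / c_i.
c = np.array([1.0, 2.0])
f_grad = lambda x: c - 1.0 / x
f_hess = lambda x: np.diag(1.0 / x**2)
x_star = damped_newton(np.array([1.0, 1.0]), f_grad, f_hess)
```

The damping factor 1/(1 + lambda(x)) is what self-concordance buys: it guarantees a decrease in the objective at every step without any line search, which is one reason these function classes are attractive for the Newton-type methods surveyed above.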