Paper ID: 2202.10670

From Optimization Dynamics to Generalization Bounds via Łojasiewicz Gradient Inequality

Fusheng Liu, Haizhao Yang, Soufiane Hayou, Qianxiao Li

Optimization and generalization are two essential aspects of statistical machine learning. In this paper, we propose a framework that connects optimization with generalization by analyzing the generalization error through the optimization trajectory of the gradient flow algorithm. The key ingredient of this framework is the Uniform-LGI (a uniform Łojasiewicz gradient inequality), a property that is generally satisfied when training machine learning models. Leveraging the Uniform-LGI, we first derive convergence rates for the gradient flow algorithm and then establish generalization bounds for a large class of machine learning models. We further apply our framework to three distinct machine learning models: linear regression, kernel regression, and two-layer neural networks. Through our approach, we obtain generalization estimates that match or extend previous results.
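
For context, a standard statement of the Łojasiewicz gradient inequality, and the one-line computation by which it yields convergence rates under gradient flow, reads as follows. The exponent convention and the requirement that the inequality hold uniformly along the trajectory are sketched here from the classical literature; the paper's exact formulation of the Uniform-LGI may differ.

Let $L$ be the training loss with infimum $L^*$, and let $\theta(t)$ follow the gradient flow $\dot{\theta}(t) = -\nabla L(\theta(t))$. A uniform Łojasiewicz gradient inequality asserts that there exist $c > 0$ and $\alpha \in (0, 1]$ such that $\|\nabla L(\theta(t))\| \ge c\,\bigl(L(\theta(t)) - L^*\bigr)^{\alpha}$ along the whole trajectory. Then
\[
  \frac{d}{dt}\bigl(L(\theta(t)) - L^*\bigr)
  = -\|\nabla L(\theta(t))\|^2
  \le -c^2 \bigl(L(\theta(t)) - L^*\bigr)^{2\alpha},
\]
and integrating this differential inequality gives exponential convergence when $\alpha = 1/2$ (the Polyak-Łojasiewicz case), a polynomial rate $O(t^{-1/(2\alpha - 1)})$ when $\alpha \in (1/2, 1]$, and convergence in finite time when $\alpha < 1/2$.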

Submitted: Feb 22, 2022