Paper ID: 2111.05478
SGD Through the Lens of Kolmogorov Complexity
Gregory Schwartzman
We prove that stochastic gradient descent (SGD) finds a solution that achieves $(1-\epsilon)$ classification accuracy on the entire dataset. We do so under two main assumptions: (1. Local progress) The model accuracy improves on average over batches. (2. Models compute simple functions) The function computed by the model is simple (has low Kolmogorov complexity). It suffices that these assumptions hold only for a tiny fraction of the epochs. Intuitively, this means that intermittent local progress of SGD implies global progress. Assumption 2 trivially holds for underparameterized models; hence, our work gives the first convergence guarantee for general, underparameterized models. Furthermore, this is the first result that is completely model-agnostic: we do not require the model to have any specific architecture or activation function, and it need not even be a neural network. Our analysis makes use of the entropy compression method, first introduced by Moser and Tardos in the context of the Lov\'asz local lemma.
Submitted: Nov 10, 2021
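
To make Assumption 1 (local progress) concrete, the following is an illustrative formalization, not necessarily the exact statement from the paper; the symbols $w_t$, $B_t$, $\mathrm{acc}$, $\delta$, $\eta$, and $\mathcal{L}$ are introduced here for illustration only.

% Illustrative sketch of the local progress assumption: a single SGD step
% on the sampled batch improves accuracy on that batch in expectation.
% w_t: model parameters at step t; B_t: batch sampled at step t;
% acc(w, B): fraction of B classified correctly under parameters w;
% \eta: step size; \mathcal{L}: training loss; \delta > 0: a fixed
% progress parameter. All notation here is hypothetical.
\[
  \mathbb{E}_{B_t}\bigl[\mathrm{acc}(w_{t+1}, B_t) - \mathrm{acc}(w_t, B_t)\bigr] \;\ge\; \delta,
  \qquad w_{t+1} = w_t - \eta\,\nabla_w \mathcal{L}(w_t; B_t).
\]

Assumption 2 then asks that the classification function computed by the model admits a short description (low Kolmogorov complexity); as the abstract notes, this holds trivially in the underparameterized regime, where the model's description is much smaller than the dataset.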