Paper ID: 2406.18800
Infinite Width Models That Work: Why Feature Learning Doesn't Matter as Much as You Think
Luke Sernau
Common infinite-width architectures such as Neural Tangent Kernels (NTKs) have historically shown weak performance compared to finite models. This is usually attributed to the absence of feature learning. We show that this explanation is insufficient. Specifically, we show that infinite-width NTKs obviate the need for feature learning: they can learn identical behavior by selecting relevant subfeatures from their (infinite) frozen feature vector. Furthermore, we show experimentally that NTKs underperform traditional finite models even when feature learning is artificially disabled in the finite models. Instead, we show that the weak performance is at least partly due to existing constructions relying on weak optimizers such as SGD. We provide a new infinite-width limit based on Adam-like learning dynamics and demonstrate empirically that the resulting models erase this performance gap.
Submitted: Jun 27, 2024
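
As a rough illustration of the two ideas in the abstract (a frozen feature vector with only the readout trained, and the sensitivity of such models to the choice of optimizer), the sketch below trains the linear readout of a finite random-features model, once with plain SGD and once with a hand-rolled Adam-style update. It is a toy finite-width stand-in, not the paper's infinite-width construction; the task, architecture, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's experimental setup): the first-layer
# weights W are frozen at initialization, so the model can only "select" among its
# fixed features via the trained readout. We compare SGD with an Adam-style update.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task.
d, n, width = 16, 512, 4096
X = rng.normal(size=(n, d))
y = np.sin(X @ rng.normal(size=d))          # arbitrary smooth target

# Frozen random features: phi(x) = relu(W^T x) / sqrt(width).
W = rng.normal(size=(d, width))
Phi = np.maximum(X @ W, 0.0) / np.sqrt(width)

def train(optimizer, steps=2000, lr=1e-2):
    """Train only the readout vector `a` on squared loss with the given optimizer."""
    a = np.zeros(width)
    m = np.zeros(width)                      # Adam first moment
    v = np.zeros(width)                      # Adam second moment
    b1, b2, eps = 0.9, 0.999, 1e-8
    for t in range(1, steps + 1):
        grad = Phi.T @ (Phi @ a - y) / n     # gradient of 0.5 * mean squared error
        if optimizer == "sgd":
            a -= lr * grad
        else:                                # "adam"
            m = b1 * m + (1 - b1) * grad
            v = b2 * v + (1 - b2) * grad**2
            m_hat = m / (1 - b1**t)
            v_hat = v / (1 - b2**t)
            a -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return 0.5 * np.mean((Phi @ a - y) ** 2)

print("SGD  loss:", train("sgd"))
print("Adam loss:", train("adam"))
```

Only the readout is ever updated, mirroring the frozen-feature picture; the two optimizers see the same features and the same loss, isolating the learning dynamics as the variable of interest.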