Paper ID: 2304.13221
Nonlocality and Nonlinearity Implies Universality in Operator Learning
Samuel Lanthaler, Zongyi Li, Andrew M. Stuart
Neural operator architectures approximate operators between infinite-dimensional Banach spaces of functions. They are gaining increased attention in computational science and engineering, due to their potential both to accelerate traditional numerical methods and to enable data-driven discovery. As the field is in its infancy, basic questions about minimal requirements for universal approximation remain open. It is clear that any general approximation of operators between spaces of functions must be both nonlocal and nonlinear. In this paper we describe how these two attributes may be combined in a simple way to deduce universal approximation. In so doing we unify the analysis of a wide range of neural operator architectures and open up consideration of new ones. A popular variant of neural operators is the Fourier neural operator (FNO). Previous analyses proving universal operator approximation theorems for FNOs resort to the use of an unbounded number of Fourier modes, relying on intuition from the traditional analysis of spectral methods. The present work challenges this point of view: (i) we reduce the FNO to its core essence, resulting in a minimal architecture termed the ``averaging neural operator'' (ANO); and (ii) we show that even this minimal architecture enjoys universal approximation, with a spatial average as its only nonlocal ingredient (corresponding to retaining only a \emph{single} Fourier mode in the special case of the FNO). The analysis paves the way for a more systematic exploration of nonlocality, both through the development of new operator learning architectures and through the analysis of existing and new ones. Numerical results are presented which give insight into complexity issues related to the roles of channel width (embedding dimension) and number of Fourier modes.
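To make the layer structure described in the abstract concrete, the following is a minimal PyTorch sketch of one ANO hidden layer, combining a pointwise (local, nonlinear) affine map with a spatial average as the single nonlocal ingredient. The class name ANOLayer, the choice of GELU activation, and the uniform-grid discretization (under which the domain average becomes a mean over grid points) are illustrative assumptions, not taken from the paper or its code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ANOLayer(nn.Module):
    """One hidden layer of an averaging neural operator (sketch):

        v(x)  ->  sigma( W v(x) + b + V * mean_y v(y) )

    The spatial mean is the only nonlocal operation; for the FNO it
    corresponds to retaining just the zeroth (single) Fourier mode.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.local = nn.Linear(channels, channels)                    # pointwise map W v(x) + b
        self.nonlocal_ = nn.Linear(channels, channels, bias=False)    # acts on the spatial average

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # v: (batch, n_points, channels) -- function values on a uniform grid
        mean = v.mean(dim=1, keepdim=True)                            # spatial average over the domain
        return F.gelu(self.local(v) + self.nonlocal_(mean))           # broadcast average back to all points

# Usage: a batch of 8 functions sampled at 128 grid points, channel width 32.
layer = ANOLayer(32)
v = torch.randn(8, 128, 32)
out = layer(v)    # same shape: (8, 128, 32)
```

In a full architecture one would, as is standard for neural operators, precede such layers with a pointwise lifting map and follow them with a pointwise projection; the sketch shows only the hidden-layer form that the universality analysis concerns.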
Submitted: Apr 26, 2023