Universal Approximation
Universal approximation theory studies the ability of neural networks to approximate any continuous function on a compact domain to arbitrary accuracy. Current research focuses on refining approximation bounds for various network architectures (including feedforward, convolutional, recurrent, and transformer networks), investigating the impact of parameter constraints (e.g., bounded weights, quantization, minimal width), and extending the theory to broader input spaces (e.g., topological vector spaces, non-metric spaces) and to operator learning. These advances strengthen the theoretical foundation of deep learning, informing model design and optimization strategies and, ultimately, improving the reliability and efficiency of applications across diverse fields.
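To make the core claim concrete: the classical single-hidden-layer result (Cybenko, 1989; Hornik, 1991; Leshno et al., 1993) says that for any continuous target on a compact set, any non-polynomial continuous activation, and any error tolerance, a wide enough network gets within that tolerance in the uniform norm. The NumPy sketch below illustrates this numerically; the target function, the evaluation grid, the chosen widths, and the random-feature least-squares fit are illustrative assumptions for this example only, not a method from any of the listed papers.

import numpy as np

# Illustrative target: a continuous function on the compact interval [-pi, pi].
def target(x):
    return np.sin(2 * x) + 0.5 * np.cos(5 * x)

rng = np.random.default_rng(0)
grid = np.linspace(-np.pi, np.pi, 2000)[:, None]   # evaluation points as a column vector
y = target(grid).ravel()

def sup_error(width):
    # One hidden layer of ReLU units with randomly drawn weights and biases;
    # only the linear output layer is fitted (a random-feature least-squares
    # proxy for training, chosen so the sketch stays short and deterministic).
    w = rng.normal(size=(1, width))
    b = rng.uniform(-np.pi, np.pi, size=width)
    hidden = np.maximum(grid @ w + b, 0.0)          # hidden-layer activations
    coef, *_ = np.linalg.lstsq(hidden, y, rcond=None)
    return np.max(np.abs(hidden @ coef - y))        # uniform-norm error on the grid

for width in (8, 64, 512):
    print(f"width={width:4d}  sup-error ~ {sup_error(width):.4f}")

The measured sup-norm error typically shrinks as the width grows, which is the empirical face of the density statement; how fast it shrinks, and how narrow or how structured (convolutional, recurrent, Toeplitz-factored) a network can be while retaining this property, is exactly what the papers below analyze.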
Papers
On the Universal Approximation Property of Deep Fully Convolutional Neural Networks
Ting Lin, Zuowei Shen, Qianxiao Li
LU decomposition and Toeplitz decomposition of a neural network
Yucong Liu, Simiao Jiao, Lek-Heng Lim
Minimal Width for Universal Property of Deep RNN
Chang hoon Song, Geonho Hwang, Jun ho Lee, Myungjoo Kang