Universal Approximation
Universal approximation theory studies the ability of neural networks to approximate functions to arbitrary accuracy, classically any continuous function on a compact domain. Current research refines approximation bounds for specific architectures (feedforward, recurrent, and transformer networks), investigates the effect of parameter constraints (e.g., bounded weights, quantization), and extends the theory to broader input spaces (e.g., topological vector spaces, non-metric spaces) and to operator learning. These results strengthen the theoretical foundation of deep learning, informing model design and optimization strategies and, ultimately, the reliability and efficiency of applications across diverse fields.
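For orientation, a hedged sketch of the classical statement in the style of Cybenko, Hornik, and Leshno et al. (given here as background, not taken from any particular paper below): for any continuous function $f$ on a compact set $K \subset \mathbb{R}^d$, any continuous non-polynomial activation $\sigma$, and any $\varepsilon > 0$, there exist a width $N$ and parameters $a_i \in \mathbb{R}$, $w_i \in \mathbb{R}^d$, $b_i \in \mathbb{R}$ such that

$$\sup_{x \in K} \Bigl| f(x) - \sum_{i=1}^{N} a_i\, \sigma\!\left(w_i^{\top} x + b_i\right) \Bigr| < \varepsilon.$$

Much of the work summarized above sharpens how $N$ (and depth) must scale with $\varepsilon$, the input dimension, and the regularity of $f$, or relaxes the assumptions on the domain, the activation, and the admissible parameters.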