Approximation Rate

Approximation rate research quantifies how quickly the approximation error of a model class, such as deep neural networks (including ReLU, convolutional, and transformer architectures) or radial basis functions, decays as the model size grows when approximating target functions. Current studies derive such rates over various function spaces (e.g., Sobolev and Besov spaces) under different error metrics and examine how model parameters (depth, width, number of experts) govern the achievable accuracy. These investigations are crucial for understanding the capabilities and limitations of different machine learning models, informing the design of more efficient algorithms, and improving the accuracy and reliability of applications in scientific computing, signal processing, and other fields.
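As a minimal illustration of what an approximation rate measures, the sketch below (not from any of the surveyed papers; the target function sin(2πx) is an arbitrary choice) computes the sup-norm error of piecewise-linear interpolation on a uniform grid over [0, 1]. A one-hidden-layer ReLU network of width n can represent any such n-piece linear interpolant exactly, and for a twice-differentiable target the error decays at rate O(n^-2), so doubling n should shrink the error by roughly 4x:

```python
import numpy as np

def sup_error(n_pieces, f=lambda x: np.sin(2 * np.pi * x)):
    """Sup-norm error of piecewise-linear interpolation of f on [0, 1]
    with n_pieces uniform segments (representable by a width-n ReLU net)."""
    knots = np.linspace(0.0, 1.0, n_pieces + 1)
    fine = np.linspace(0.0, 1.0, 100_001)      # dense grid to estimate the sup norm
    approx = np.interp(fine, knots, f(knots))  # piecewise-linear interpolant
    return float(np.max(np.abs(f(fine) - approx)))

# Error should fall by ~4x each time n doubles, i.e. rate O(n^-2).
for n in (8, 16, 32, 64):
    print(n, sup_error(n))
```

The measured ratio between successive errors approaches 4, matching the classical O(n^-2) rate; approximation rate results for deep networks establish analogous error-vs-size laws for richer function classes and architectures.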

Papers