Paper ID: 2208.08138

Shallow neural network representation of polynomials

Aleksandr Beknazaryan

We show that $d$-variate polynomials of degree $R$ can be represented on $[0,1]^d$ as shallow neural networks (SNNs) of width $2(R+d)^d$. Also, via the SNN representation of localized Taylor polynomials of univariate $C^\beta$-smooth functions, we derive for shallow networks the minimax-optimal rate of convergence, up to a logarithmic factor, to an unknown univariate regression function.

Submitted: Aug 17, 2022
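
As an illustrative aside (not taken from the paper): the space of $d$-variate polynomials of degree at most $R$ has dimension $\binom{R+d}{d} \le (R+d)^d$, and every such polynomial is a linear combination of ridge powers $(w \cdot x + b)^R$, i.e. it is computed exactly by a one-hidden-layer network with power activation $t \mapsto t^R$. The minimal Python sketch below checks this numerically for assumed values $d = 2$, $R = 3$ with randomly drawn hidden weights; the activation, weights, and width used here are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from math import comb

# Illustrative check, not the paper's construction: a d-variate polynomial of
# degree <= R lies in a space of dimension N = C(R+d, d) <= (R+d)^d, and that
# space is spanned by ridge powers (w.x + b)^R. A one-hidden-layer network with
# N hidden units and power activation t -> t^R can therefore reproduce it exactly.

rng = np.random.default_rng(0)
d, R = 2, 3                      # assumed example: bivariate polynomial of degree 3
N = comb(R + d, d)               # dimension of the space of degree-<=R polynomials

# Target polynomial p(x1, x2) = 1 + x1*x2 - 2*x1^3 + x2^2 (degree 3, 2 variables).
def p(X):
    x1, x2 = X[:, 0], X[:, 1]
    return 1 + x1 * x2 - 2 * x1**3 + x2**2

# Random hidden-layer parameters; with probability 1 the N ridge powers
# (W x + b)^R are linearly independent and span the whole polynomial space.
W = rng.standard_normal((N, d))
b = rng.standard_normal(N)

# Sample points in [0, 1]^d and build the hidden-layer feature matrix.
X = rng.random((5 * N, d))
H = (X @ W.T + b) ** R           # shape (n_samples, N), activation t -> t^R

# Output weights by least squares; exact representability gives a residual
# close to machine precision.
coef, *_ = np.linalg.lstsq(H, p(X), rcond=None)
print("max abs error:", np.max(np.abs(H @ coef - p(X))))
```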