Paper ID: 2206.13280
Expressive power of binary and ternary neural networks
Aleksandr Beknazaryan
We show that deep sparse ReLU networks with ternary weights and deep ReLU networks with binary weights can approximate $\beta$-H\"older functions on $[0,1]^d$. Moreover, for any interval $[a,b)\subset\mathbb{R}$, continuous functions on $[0,1]^d$ can be approximated by networks of depth $2$ with the binary activation function $\mathds{1}_{[a,b)}$.
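The depth-2 claim can be illustrated with a minimal one-dimensional sketch: hidden units with activation $\mathds{1}_{[a,b)}$ are placed so that each fires on exactly one bin of a uniform partition of $[0,1)$, and the output layer stores the function values at the bin midpoints, giving a piecewise-constant approximation whose uniform error is controlled by the modulus of continuity. The helper names (`depth2_indicator_net`, `indicator`) and the specific parametrization are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def indicator(t, a, b):
    """Binary activation 1_{[a,b)}: 1 if a <= t < b, else 0 (elementwise)."""
    return ((t >= a) & (t < b)).astype(float)

def depth2_indicator_net(f, n, a=0.0, b=1.0):
    """Depth-2 network with activation 1_{[a,b)} approximating a continuous f
    on [0,1) by a piecewise-constant function on n bins (illustrative sketch)."""
    i = np.arange(n)
    # Hidden unit i maps its bin [i/n, (i+1)/n) affinely onto [a, b),
    # so its indicator activation fires exactly on that bin.
    w = np.full(n, n * (b - a))      # hidden-layer weights
    c = a - i * (b - a)              # hidden-layer biases
    v = f((i + 0.5) / n)             # output weights: f at bin midpoints

    def net(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        h = indicator(np.outer(x, w) + c, a, b)  # hidden activations, shape (len(x), n)
        return h @ v                             # linear output layer
    return net

# Usage: the sup-norm error shrinks with the modulus of continuity of f on bins of width 1/n.
f = lambda x: np.sin(2 * np.pi * x)
net = depth2_indicator_net(f, n=200, a=-1.0, b=3.0)
xs = np.linspace(0, 1, 1001, endpoint=False)
print(np.max(np.abs(net(xs) - f(xs))))  # small uniform approximation error
```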
Submitted: Jun 27, 2022