Paper ID: 2201.00945

An unfeasibility view of neural network learning

Joos Heintz, Hvara Ocar, Luis Miguel Pardo, Andres Rojas Paredes, Enrique Carlos Segura

We define the notion of a continuously differentiable perfect learning algorithm for multilayer neural network architectures and show that such algorithms do not exist, provided that the size of the data set exceeds the number of parameters involved and the activation functions are logistic, tanh, or sin.
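A minimal sketch of the setting as suggested by the abstract (the notation $\Phi$, $f_\theta$, $N$, $p$ is ours; the precise definitions are given in the paper): for an architecture with parameter vector $\theta \in \mathbb{R}^p$, a perfect learning algorithm would be a map

$$\Phi : \big((x_1,y_1),\dots,(x_N,y_N)\big) \longmapsto \theta \quad \text{with} \quad f_\theta(x_i) = y_i \ \text{for all } 1 \le i \le N,$$

i.e. one that returns parameters interpolating every data point exactly. The nonexistence claim then reads: no such $\Phi$ can be continuously differentiable once $N > p$ and the activation functions are logistic, tanh, or sin.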

Submitted: Jan 4, 2022