Paper ID: 2309.05102 • Published Sep 10, 2023

Is Learning in Biological Neural Networks based on Stochastic Gradient Descent? An analysis using stochastic processes

Sören Christensen, Jan Kallsen
In recent years, there has been an intense debate about how learning in biological neural networks (BNNs) differs from learning in artificial neural networks. It is often argued that the updating of connections in the brain relies only on local information, and that an optimization method of stochastic gradient descent (SGD) type therefore cannot be used. In this paper, we study a stochastic model for supervised learning in BNNs. We show that a (continuous) gradient step occurs approximately when each learning opportunity is processed by many local updates. This result suggests that stochastic gradient descent may indeed play a role in optimizing BNNs.
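The abstract's central claim is that many purely local updates can average out to an approximate gradient step. A minimal numerical sketch of this idea (not the authors' actual model, which is a stochastic-process construction) is zeroth-order weight perturbation on a toy quadratic loss: each local update uses only a random direction and two loss evaluations, yet the average over many such updates recovers the true gradient, since for a standard normal direction u, E[(L(w + sigma*u) - L(w)) / sigma * u] tends to the gradient of L at w as sigma shrinks. The loss, step sizes, and update counts below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quadratic loss L(w) = 0.5 w'Aw - b'w; its gradient is A @ w - b.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])

def loss(w):
    return 0.5 * w @ A @ w - b @ w

def grad(w):
    return A @ w - b

w = np.array([2.0, 2.0])
sigma = 1e-3      # size of each local perturbation
n_local = 20000   # many local updates per learning opportunity

# Each "local update" only perturbs w in a random direction and compares
# losses (local information); no explicit gradient is ever computed.
est = np.zeros(2)
for _ in range(n_local):
    u = rng.standard_normal(2)
    est += (loss(w + sigma * u) - loss(w)) / sigma * u
est /= n_local

# The averaged local updates approximate the true gradient at w,
# so a step along -est is approximately a gradient-descent step.
print("averaged local update:", est)
print("true gradient:        ", grad(w))
```

Here the approximation error shrinks as the number of local updates grows, mirroring (in a much simpler setting) the paper's limit of many local updates per learning opportunity.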