Backpropagation
Backpropagation is a fundamental algorithm for training artificial neural networks, primarily used to calculate gradients for updating network weights to minimize error. Current research focuses on improving backpropagation's efficiency and biological plausibility, exploring alternatives like forward-forward algorithms and methods that avoid the need for storing activations or full gradient calculations, often within the context of specific architectures such as transformers, spiking neural networks, and physics-informed neural networks. These efforts aim to reduce computational costs, memory requirements, and energy consumption, ultimately impacting the scalability and applicability of deep learning across various domains, including resource-constrained devices and large-scale models.
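To make the gradient calculation and weight update described above concrete, here is a minimal sketch of backpropagation for a tiny two-layer network. It uses only NumPy; the layer sizes, learning rate, and toy data are illustrative assumptions and are not drawn from any of the papers listed below.

```python
# Minimal backpropagation sketch: forward pass, chain-rule backward pass,
# and a gradient-descent weight update on a 3 -> 4 -> 1 network.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: 8 samples, 3 input features, 1 target (illustrative).
X = rng.normal(size=(8, 3))
y = rng.normal(size=(8, 1))

# Weight matrices for the two layers; the hidden layer uses tanh.
W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))
lr = 0.1  # learning rate (assumed value)

for step in range(100):
    # Forward pass: store the activations the backward pass will need.
    h_pre = X @ W1          # hidden pre-activation
    h = np.tanh(h_pre)      # hidden activation
    y_hat = h @ W2          # network output
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: apply the chain rule layer by layer.
    g_out = 2 * (y_hat - y) / len(X)          # dLoss/dy_hat
    g_W2 = h.T @ g_out                        # dLoss/dW2
    g_h = g_out @ W2.T                        # dLoss/dh
    g_pre = g_h * (1 - np.tanh(h_pre) ** 2)   # back through tanh
    g_W1 = X.T @ g_pre                        # dLoss/dW1

    # Gradient-descent update to minimize the error.
    W1 -= lr * g_W1
    W2 -= lr * g_W2

print(f"final loss: {loss:.4f}")
```

Note that the forward pass must keep `h_pre` and `h` in memory until the backward pass runs; this stored-activation cost is exactly what several of the alternatives surveyed above (e.g., forward-forward-style methods) try to reduce or avoid.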
Papers
Training neural networks with end-to-end optical backpropagation
James Spall, Xianxin Guo, A. I. Lvovsky
A Novel Method for improving accuracy in neural network by reinstating traditional back propagation technique
Gokulprasath R
An In-Depth Analysis of Discretization Methods for Communication Learning using Backpropagation with Multi-Agent Reinforcement Learning
Astrid Vanneste, Simon Vanneste, Kevin Mets, Tom De Schepper, Siegfried Mercelis, Peter Hellinckx
Unlocking the Potential of Similarity Matching: Scalability, Supervision and Pre-training
Yanis Bahroun, Shagesh Sridharan, Atithi Acharya, Dmitri B. Chklovskii, Anirvan M. Sengupta
Detection and Segmentation of Cosmic Objects Based on Adaptive Thresholding and Back Propagation Neural Network
Samia Sultana, Shyla Afroge