Backpropagation
Backpropagation is the fundamental algorithm for training artificial neural networks: it computes the gradient of the loss with respect to the network weights, which are then updated to minimize error. Current research focuses on improving its efficiency and biological plausibility, exploring alternatives such as forward-forward algorithms and methods that avoid storing activations or computing full gradients, often in the context of specific architectures such as transformers, spiking neural networks, and physics-informed neural networks. These efforts aim to reduce computational cost, memory requirements, and energy consumption, ultimately improving the scalability and applicability of deep learning across domains ranging from resource-constrained devices to large-scale models.
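To make the gradient-and-update loop concrete, here is a minimal sketch of backpropagation for a two-layer network on a toy regression task, written in plain NumPy. The layer sizes, tanh activation, squared-error loss, and learning rate are illustrative assumptions, not details drawn from the papers listed below; note how the forward pass must store activations for reuse in the backward pass, which is exactly the memory cost that some of the alternatives mentioned above try to avoid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(x) from a small sample.
X = rng.uniform(-3, 3, size=(64, 1))
y = np.sin(X)

# Parameters of a 1 -> 16 -> 1 network with tanh hidden units (illustrative sizes).
W1 = rng.normal(scale=0.5, size=(1, 16))
b1 = np.zeros((1, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros((1, 1))
lr = 0.05

for step in range(2000):
    # Forward pass: store the activations needed by the backward pass.
    z1 = X @ W1 + b1        # hidden pre-activation
    a1 = np.tanh(z1)        # hidden activation
    y_hat = a1 @ W2 + b2    # network output

    # Mean squared error loss.
    err = y_hat - y
    loss = np.mean(err ** 2)

    # Backward pass: apply the chain rule layer by layer.
    d_yhat = 2 * err / len(X)                 # dL/dy_hat
    dW2 = a1.T @ d_yhat                       # dL/dW2
    db2 = d_yhat.sum(axis=0, keepdims=True)   # dL/db2
    d_a1 = d_yhat @ W2.T                      # propagate gradient to hidden layer
    d_z1 = d_a1 * (1 - a1 ** 2)               # tanh'(z1) = 1 - tanh(z1)^2
    dW1 = X.T @ d_z1                          # dL/dW1
    db1 = d_z1.sum(axis=0, keepdims=True)     # dL/db1

    # Gradient-descent weight update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"final training loss: {loss:.4f}")
```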
Papers
Training a multilayer dynamical spintronic network with standard machine learning tools to perform time series classification
Erwan Plouet, Dédalo Sanz-Hernández, Aymeric Vecchiola, Julie Grollier, Frank Mizrahi
4D-Var using Hessian approximation and backpropagation applied to automatically-differentiable numerical and machine learning models
Kylen Solvik, Stephen G. Penny, Stephan Hoyer
Learning by the F-adjoint
Ahmed Boughammoura
LPGD: A General Framework for Backpropagation through Embedded Optimization Layers
Anselm Paulus, Georg Martius, Vít Musil
Momentum Auxiliary Network for Supervised Local Learning
Junhao Su, Changpeng Cai, Feiyu Zhu, Chenghao He, Xiaojie Xu, Dongzhi Guan, Chenyang Si