Backpropagation
Backpropagation is the fundamental algorithm for training artificial neural networks: it computes the gradients of a loss function with respect to the network weights, which are then used to update those weights and reduce error. Current research focuses on improving backpropagation's efficiency and biological plausibility, exploring alternatives such as forward-forward algorithms and methods that avoid storing activations or computing full gradients, often in the context of specific architectures like transformers, spiking neural networks, and physics-informed neural networks. These efforts aim to reduce computational cost, memory requirements, and energy consumption, improving the scalability and applicability of deep learning across domains ranging from resource-constrained devices to large-scale models.
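To make the gradient computation concrete, the following is a minimal sketch of backpropagation for a two-layer network written by hand with NumPy. The task (fitting sin(x)), the network size, and the learning rate are illustrative assumptions, not taken from any of the papers below; the point is simply to show the forward pass, the layer-by-layer application of the chain rule, and the resulting weight update.

```python
import numpy as np

# Illustrative toy data: learn y = sin(x) on a handful of points.
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(64, 1))
y = np.sin(X)

# Small two-layer network (1 -> 16 -> 1) with tanh hidden units.
W1 = rng.normal(0, 0.5, size=(1, 16))
b1 = np.zeros((1, 16))
W2 = rng.normal(0, 0.5, size=(16, 1))
b2 = np.zeros((1, 1))

lr = 0.05  # assumed learning rate for this sketch
for step in range(2000):
    # Forward pass: keep intermediate activations for the backward pass.
    z1 = X @ W1 + b1          # hidden pre-activation
    a1 = np.tanh(z1)          # hidden activation
    y_hat = a1 @ W2 + b2      # network output
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: chain rule applied layer by layer.
    n = X.shape[0]
    d_yhat = 2.0 * (y_hat - y) / n           # dL/d(y_hat) for mean-squared error
    dW2 = a1.T @ d_yhat                      # dL/dW2
    db2 = d_yhat.sum(axis=0, keepdims=True)  # dL/db2
    d_a1 = d_yhat @ W2.T                     # error propagated to hidden layer
    d_z1 = d_a1 * (1.0 - a1 ** 2)            # tanh'(z1) = 1 - tanh(z1)^2
    dW1 = X.T @ d_z1                         # dL/dW1
    db1 = d_z1.sum(axis=0, keepdims=True)    # dL/db1

    # Gradient-descent update: move each weight against its gradient.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final MSE: {loss:.4f}")
```

Storing a1 and z1 during the forward pass is exactly the activation memory that several of the papers below try to reduce or avoid.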
Papers
Towards Scaling Difference Target Propagation by Learning Backprop Targets
Maxence Ernoult, Fabrice Normandin, Abhinav Moudgil, Sean Spinney, Eugene Belilovsky, Irina Rish, Blake Richards, Yoshua Bengio
DNS: Determinantal Point Process Based Neural Network Sampler for Ensemble Reinforcement Learning
Hassam Sheikh, Kizza Frisbee, Mariano Phielipp
Memory-Efficient Backpropagation through Large Linear Layers
Daniel Bershatsky, Aleksandr Mikhalev, Alexandr Katrutsa, Julia Gusak, Daniil Merkulov, Ivan Oseledets
Learning on Arbitrary Graph Topologies via Predictive Coding
Tommaso Salvatori, Luca Pinchetti, Beren Millidge, Yuhang Song, Tianyi Bao, Rafal Bogacz, Thomas Lukasiewicz