Backpropagation
Backpropagation is a fundamental algorithm for training artificial neural networks: it computes the gradients used to update network weights so as to minimize a loss. Current research focuses on improving backpropagation's efficiency and biological plausibility, exploring alternatives such as forward-forward algorithms and methods that avoid storing activations or computing full gradients, often in the context of specific architectures such as transformers, spiking neural networks, and physics-informed neural networks. These efforts aim to reduce computational cost, memory requirements, and energy consumption, ultimately improving the scalability and applicability of deep learning across domains, from resource-constrained devices to large-scale models.
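To make the gradient computation concrete, here is a minimal sketch of backpropagation for a one-hidden-layer regression network with a mean-squared-error loss, written against NumPy. The network size, data, learning rate, and variable names are illustrative assumptions, not part of any paper listed below; the point is simply the forward pass, the chain-rule backward pass over stored activations, and the gradient-descent weight update.

```python
# Minimal backpropagation sketch (illustrative assumptions: a 1 -> 8 -> 1
# tanh network, MSE loss, toy sine-regression data, fixed learning rate).
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(pi * x) on a small grid.
X = np.linspace(-1.0, 1.0, 32).reshape(-1, 1)   # inputs, shape (32, 1)
y = np.sin(np.pi * X)                            # targets, shape (32, 1)

# Parameters of the network.
W1 = rng.normal(scale=0.5, size=(1, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros((1, 1))
lr = 0.1

for step in range(2000):
    # Forward pass: keep the intermediate activations needed later.
    z1 = X @ W1 + b1            # hidden pre-activation
    h1 = np.tanh(z1)            # hidden activation
    y_hat = h1 @ W2 + b2        # linear output
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: apply the chain rule from the loss to each parameter.
    d_yhat = 2.0 * (y_hat - y) / len(X)       # dL/dy_hat for MSE
    dW2 = h1.T @ d_yhat
    db2 = d_yhat.sum(axis=0, keepdims=True)
    d_h1 = d_yhat @ W2.T
    d_z1 = d_h1 * (1.0 - np.tanh(z1) ** 2)    # tanh'(z1)
    dW1 = X.T @ d_z1
    db1 = d_z1.sum(axis=0, keepdims=True)

    # Gradient-descent update: move each weight against its gradient.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final MSE: {loss:.4f}")
```

Note that the backward pass reuses the activations stored during the forward pass; the memory cost of that storage is exactly what several of the backpropagation-free or block-wise approaches surveyed above try to avoid.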
Papers
InvertibleNetworks.jl: A Julia package for scalable normalizing flows
Rafael Orozco, Philipp Witte, Mathias Louboutin, Ali Siahkoohi, Gabrio Rizzuti, Bas Peters, Felix J. Herrmann
Unlocking Deep Learning: A BP-Free Approach for Parallel Block-Wise Training of Neural Networks
Anzhe Cheng, Zhenkun Wang, Chenzhong Yin, Mingxi Cheng, Heng Ping, Xiongye Xiao, Shahin Nazarian, Paul Bogdan