Backward Propagation
Backward propagation (backpropagation, BP), a cornerstone of training artificial neural networks, uses the chain rule to efficiently compute the gradient of the loss with respect to every model parameter, which gradient-based optimizers then use to update those parameters. Current research focuses on improving BP's efficiency and applicability across diverse architectures, including transformers, spiking neural networks, and optical neural networks, and explores alternatives such as forward-only methods and modifications that reduce computational cost and memory requirements. These advances are crucial for training increasingly complex models, enabling progress in areas such as large language models, neuromorphic computing, and efficient solutions to partial differential equations.
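As a concrete illustration, the sketch below traces one backpropagation step through a small two-layer network with a mean-squared-error loss: the forward pass caches intermediate activations, the backward pass applies the chain rule to them to recover parameter gradients, and a gradient-descent step updates the weights. The layer sizes, data, and learning rate are illustrative assumptions, not drawn from any particular paper.

```python
import numpy as np

# Minimal backpropagation sketch for a 2-layer MLP with an
# MSE loss. All sizes and hyperparameters are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))        # batch of 8 inputs, 4 features
y = rng.normal(size=(8, 1))        # regression targets

W1 = rng.normal(scale=0.1, size=(4, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1))
b2 = np.zeros(1)

# Forward pass: cache intermediates needed by the backward pass.
z1 = X @ W1 + b1                   # hidden pre-activation
h1 = np.tanh(z1)                   # hidden activation
y_hat = h1 @ W2 + b2               # prediction
loss = np.mean((y_hat - y) ** 2)

# Backward pass: apply the chain rule layer by layer,
# reusing the cached forward quantities.
d_yhat = 2.0 * (y_hat - y) / y.shape[0]   # dL/dy_hat
dW2 = h1.T @ d_yhat
db2 = d_yhat.sum(axis=0)
d_h1 = d_yhat @ W2.T
d_z1 = d_h1 * (1.0 - np.tanh(z1) ** 2)    # tanh'(z) = 1 - tanh(z)^2
dW1 = X.T @ d_z1
db1 = d_z1.sum(axis=0)

# Gradient-descent parameter update.
lr = 0.1
W1 -= lr * dW1; b1 -= lr * db1
W2 -= lr * dW2; b2 -= lr * db2
```

The key efficiency property is that each cached forward quantity is reused exactly once on the way back, so the backward pass costs roughly the same as the forward pass; the memory needed to hold those caches is precisely what many of the BP modifications surveyed above try to reduce.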