Backward Propagation
Backward propagation (BP), also known as backpropagation, is a cornerstone of training artificial neural networks: it applies the chain rule to efficiently compute the gradient of a loss function with respect to every model parameter, and those gradients are then used to update the parameters. Current research focuses on improving BP's efficiency and extending its applicability across diverse architectures, including transformers, spiking neural networks, and optical neural networks, while also exploring forward-only alternatives and modifications that reduce computational cost and memory requirements. These advances are crucial for training increasingly complex models, enabling progress in areas such as large language models, neuromorphic computing, and efficient solvers for partial differential equations.
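As a concrete illustration of what BP computes, the sketch below applies the chain rule by hand to a tiny two-layer network with a mean-squared-error loss. The layer sizes, data, and learning rate are illustrative assumptions, not taken from any of the papers surveyed here.

```python
import numpy as np

# Minimal sketch of backpropagation for a two-layer ReLU network.
# All shapes and values below are illustrative assumptions.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # batch of 4 inputs, 3 features each
y = rng.normal(size=(4, 2))          # targets
W1 = 0.1 * rng.normal(size=(3, 5))   # first-layer weights
W2 = 0.1 * rng.normal(size=(5, 2))   # second-layer weights

# Forward pass: cache the intermediates the backward pass will need.
z1 = x @ W1
h1 = np.maximum(z1, 0.0)             # ReLU activation
y_hat = h1 @ W2
loss = 0.5 * np.mean((y_hat - y) ** 2)

# Backward pass: chain rule from the loss back to each weight matrix.
d_yhat = (y_hat - y) / y.size        # dL/d_yhat for the mean-squared loss
dW2 = h1.T @ d_yhat                  # gradient w.r.t. second-layer weights
d_h1 = d_yhat @ W2.T                 # gradient flowing into the hidden layer
d_z1 = d_h1 * (z1 > 0)               # ReLU passes gradient only where z1 > 0
dW1 = x.T @ d_z1                     # gradient w.r.t. first-layer weights

# Gradient-descent update using the computed gradients.
lr = 0.1
W1 -= lr * dW1
W2 -= lr * dW2
print(f"loss = {loss:.4f}")
```

Automatic-differentiation frameworks perform exactly this cache-and-chain bookkeeping; the memory spent storing forward activations is one of the costs that the forward-only and reduced-memory variants mentioned above aim to avoid.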