Gradient Backpropagation
Gradient backpropagation is a fundamental algorithm for training artificial neural networks, aiming to efficiently adjust network parameters to minimize prediction errors. Current research focuses on improving its stability and efficiency, exploring alternatives like forward-forward algorithms and investigating methods to mitigate issues such as vanishing/exploding gradients, particularly in recurrent networks and specialized architectures like binary neural networks and spiking neural networks. These advancements are crucial for enhancing the performance, interpretability, and applicability of neural networks across diverse fields, from soft sensors and image classification to physics simulations and edge computing.
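To make the core idea concrete, here is a minimal sketch of gradient backpropagation for a one-hidden-layer network trained on a toy regression task. The data, network sizes, and learning rate are illustrative assumptions, not taken from any paper above; the backward pass applies the chain rule by hand, layer by layer.

```python
# Minimal backpropagation sketch: one hidden layer (tanh), linear output,
# mean-squared-error loss. All shapes and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = 2x on scalars.
X = rng.uniform(-1, 1, size=(64, 1))
y = 2.0 * X

# Parameters: input -> hidden (tanh) -> output (linear).
W1 = rng.normal(0, 0.5, size=(1, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, size=(8, 1))
b2 = np.zeros(1)
lr = 0.1

for step in range(500):
    # Forward pass.
    h_pre = X @ W1 + b1            # hidden pre-activation
    h = np.tanh(h_pre)             # hidden activation
    y_hat = h @ W2 + b2            # prediction
    err = y_hat - y
    loss = np.mean(err ** 2)

    # Backward pass: chain rule from the loss to each parameter.
    n = X.shape[0]
    d_yhat = 2.0 * err / n         # dL/dy_hat
    dW2 = h.T @ d_yhat             # dL/dW2
    db2 = d_yhat.sum(axis=0)       # dL/db2
    d_h = d_yhat @ W2.T            # propagate gradient through W2
    d_hpre = d_h * (1.0 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_hpre             # dL/dW1
    db1 = d_hpre.sum(axis=0)       # dL/db1

    # Gradient-descent parameter update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.6f}")
```

The `1.0 - h ** 2` term is where the vanishing-gradient issue mentioned above originates: for saturated tanh units it approaches zero, shrinking the gradient at every layer it passes through.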