Robust Initialization
Robust initialization methods aim to improve the training efficiency and stability of neural networks by carefully selecting initial parameter values. Current research focuses on initialization strategies tailored to specific architectures, including ResNets, recurrent neural networks (RNNs), and models used in image processing such as kernel regression methods, often leveraging techniques like segmentation, pre-training, and meta-learning to achieve robustness. These advances lead to faster convergence, improved generalization, and reduced sensitivity to hyperparameter choices, with applications ranging from image processing and autonomous driving to federated learning and scientific computing.
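As a minimal sketch of the variance-scaling idea that many of these methods build on, the snippet below implements He (Kaiming) and Glorot (Xavier) initialization with NumPy. These are standard baselines rather than the method of any specific paper listed here; the function names are illustrative.

```python
import numpy as np

def he_init(fan_in, fan_out, seed=None):
    """He (Kaiming) initialization: weight variance 2/fan_in,
    chosen so ReLU activations keep roughly constant variance
    as signals propagate through layers."""
    rng = np.random.default_rng(seed)
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def glorot_init(fan_in, fan_out, seed=None):
    """Glorot (Xavier) initialization: weight variance
    2/(fan_in + fan_out), balancing forward and backward
    signal variance for roughly linear activations."""
    rng = np.random.default_rng(seed)
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

# Empirical std of a He-initialized 512x512 layer should be
# close to sqrt(2/512) ≈ 0.0625.
W = he_init(512, 512, seed=0)
print(W.shape, float(W.std()))
```

Architecture-specific schemes (e.g., for very deep ResNets or RNNs) typically refine this variance-scaling principle, for instance by rescaling residual branches or constraining recurrent weight spectra.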