Initialization Bias
Initialization bias, the impact of initial parameter settings on the training and performance of machine learning models, is a critical area of research across many domains. Current investigations focus on understanding how initialization strategies shape learning dynamics, particularly in deep neural networks and vision transformers, and on mitigating adverse effects such as sensitivity to hyperparameters and vulnerability to data reconstruction attacks in federated learning. Addressing initialization bias is crucial for improving model robustness, efficiency, and trustworthiness, with implications for applications ranging from anomaly detection to natural language processing and meta-learning.
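As a rough illustration of why initialization choices shape learning dynamics, the sketch below (assuming PyTorch; the helper names make_mlp and activation_std are hypothetical) compares how the activation scale evolves through a deep ReLU network under a naive Gaussian initialization versus Xavier and Kaiming schemes. Poorly scaled weights make activations explode or vanish with depth before any training happens, which is one concrete mechanism behind initialization bias.

```python
# Minimal sketch (PyTorch assumed): how different weight initializations
# change signal propagation through a deep ReLU MLP.
import torch
import torch.nn as nn


def make_mlp(init_fn, depth=20, width=256):
    """Build a deep ReLU MLP whose linear weights are set by init_fn."""
    layers = []
    for _ in range(depth):
        linear = nn.Linear(width, width)
        init_fn(linear.weight)      # apply the chosen initialization scheme
        nn.init.zeros_(linear.bias)
        layers += [linear, nn.ReLU()]
    return nn.Sequential(*layers)


@torch.no_grad()
def activation_std(model, width=256, batch=1024):
    """Track the standard deviation of activations after each ReLU."""
    x = torch.randn(batch, width)
    stds = []
    for layer in model:
        x = layer(x)
        if isinstance(layer, nn.ReLU):
            stds.append(x.std().item())
    return stds


inits = {
    "naive N(0, 1)":  lambda w: nn.init.normal_(w, std=1.0),
    "Xavier uniform": nn.init.xavier_uniform_,
    "Kaiming normal": lambda w: nn.init.kaiming_normal_(w, nonlinearity="relu"),
}

for name, fn in inits.items():
    stds = activation_std(make_mlp(fn))
    print(f"{name:15s} activation std, first/last layer: {stds[0]:.3g} / {stds[-1]:.3g}")
```

Under this setup the naive Gaussian init blows up the activation scale by roughly an order of magnitude per layer, Xavier shrinks it under ReLU, and Kaiming keeps it approximately constant, which is why depth-aware schemes are the usual starting point when studying or correcting initialization bias.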
Papers
16 papers, dated December 13, 2021 to September 22, 2024.