Initialization Bias

Initialization bias refers to the impact of initial parameter settings on the training dynamics and final performance of machine learning models, and it is an active area of research across domains. Current work investigates how initialization strategies shape learning dynamics, particularly in deep neural networks and vision transformers, and how to mitigate negative effects such as sensitivity to hyperparameters and, in federated learning, vulnerability to data reconstruction attacks. Addressing initialization bias is crucial for improving model robustness, efficiency, and trustworthiness, with implications for applications ranging from anomaly detection to natural language processing and meta-learning.
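
To make the effect of initialization on learning dynamics concrete, here is a minimal sketch (assuming PyTorch; the network depth, width, and the particular schemes compared are illustrative choices, not drawn from any specific paper above). It propagates random inputs through a freshly initialized deep ReLU MLP and measures how the activation scale at the output depends solely on the weight-initialization scheme:

```python
# Minimal sketch: how the initialization scheme alone changes
# forward-pass activation statistics in a deep ReLU MLP (PyTorch).
import torch
import torch.nn as nn

def final_activation_std(init_fn, depth=20, width=256, n_samples=1024):
    """Build a deep MLP, initialize weights with init_fn, and return the
    standard deviation of the last layer's activations on random inputs."""
    torch.manual_seed(0)
    layers = []
    for _ in range(depth):
        linear = nn.Linear(width, width)
        init_fn(linear.weight)       # apply the initialization under test
        nn.init.zeros_(linear.bias)
        layers += [linear, nn.ReLU()]
    net = nn.Sequential(*layers)
    with torch.no_grad():
        x = torch.randn(n_samples, width)
        return net(x).std().item()

# Hypothetical comparison of three common schemes:
schemes = {
    "normal(std=1.0)": lambda w: nn.init.normal_(w, std=1.0),
    "xavier_uniform":  nn.init.xavier_uniform_,
    "kaiming_normal":  lambda w: nn.init.kaiming_normal_(w, nonlinearity="relu"),
}
for name, init_fn in schemes.items():
    print(f"{name:16s} final activation std: {final_activation_std(init_fn):.3e}")
```

Running the sketch typically shows the unscaled normal initialization exploding and the Xavier scheme shrinking toward zero after twenty ReLU layers, while the Kaiming (He) scheme keeps the activation scale roughly stable; this variance-preservation property is the standard argument for fan-in-scaled initialization in deep ReLU networks.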

Papers