Weight Freezing

Weight freezing is a technique that selectively prevents certain model weights from updating during training, with the goal of improving model performance, efficiency, and robustness. Current research applies weight freezing across a range of neural network architectures, including large language models and convolutional neural networks, often in combination with pruning, quantization, or curriculum learning, to achieve parameter-efficient fine-tuning or to mitigate catastrophic forgetting in continual learning. The approach is significant because it addresses challenges such as overfitting, backdoor vulnerabilities, and the computational cost of training and deploying large models, improving both the efficiency of machine learning algorithms and their practical applicability in resource-constrained environments.
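
As a minimal sketch of the basic mechanism, the PyTorch snippet below freezes a pretrained backbone and fine-tunes only a small task head by toggling `requires_grad`. The architecture, layer choices, and hyperparameters are illustrative assumptions, not taken from any particular paper.

```python
import torch
from torch import nn

# Illustrative model: a "backbone" layer we freeze and a task head we train.
model = nn.Sequential(
    nn.Linear(784, 256),   # pretrained backbone layer (to be frozen)
    nn.ReLU(),
    nn.Linear(256, 10),    # task head (kept trainable)
)

# Freeze every parameter, then unfreeze the head.
for param in model.parameters():
    param.requires_grad = False
for param in model[2].parameters():
    param.requires_grad = True

# Pass only trainable parameters to the optimizer so frozen weights
# receive no gradient updates and no optimizer state.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

# One illustrative training step on random data.
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```

Filtering the optimizer to trainable parameters is what makes this parameter-efficient in practice: frozen weights accumulate no gradients or optimizer state, which reduces memory and compute during fine-tuning.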

Papers