Modality Freezing
Modality freezing is a technique for improving the efficiency and performance of machine learning models by selectively disabling training or updates for certain parameters, layers, or entire modality-specific components. Current research applies it in diverse settings, including multi-modal entity alignment, diffusion models for image generation, and vision transformers for self-supervised learning, often through strategies such as progressive layer freezing or selective tensor freezing. Freezing can accelerate training, help guard against unauthorized model adaptations, and improve efficiency without substantial accuracy loss, which matters both for managing computational resources and for the robustness of deployed models. A minimal sketch of the basic mechanism is given below.
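The sketch below illustrates the core idea in PyTorch (an assumed framework, not specified in the source): a toy two-modality model in which one modality's encoder is frozen by disabling gradients for its parameters, while the rest of the model continues to train. The model class, layer sizes, and freezing choice are illustrative only.

```python
# Minimal sketch of modality freezing, assuming PyTorch.
# Model structure and the choice of which modality to freeze are hypothetical.
import torch
import torch.nn as nn


class TwoModalityModel(nn.Module):
    """Toy multi-modal model: a vision encoder, a text encoder, and a fusion head."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.vision_encoder = nn.Sequential(
            nn.Linear(128, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        self.text_encoder = nn.Sequential(
            nn.Linear(64, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        self.fusion_head = nn.Linear(2 * dim, 10)

    def forward(self, image_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
        v = self.vision_encoder(image_feats)
        t = self.text_encoder(text_feats)
        return self.fusion_head(torch.cat([v, t], dim=-1))


def freeze(module: nn.Module) -> None:
    """Disable gradient updates for every parameter in `module`."""
    for p in module.parameters():
        p.requires_grad = False


model = TwoModalityModel()

# Freeze the vision modality: its weights stay fixed while the text encoder
# and fusion head continue to train.
freeze(model.vision_encoder)

# Pass only trainable parameters to the optimizer, so frozen tensors are skipped.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

# One illustrative training step on random data.
image_feats = torch.randn(8, 128)
text_feats = torch.randn(8, 64)
labels = torch.randint(0, 10, (8,))

logits = model(image_feats, text_feats)
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()          # frozen parameters receive no gradients
optimizer.step()
optimizer.zero_grad()
```

Progressive layer freezing, mentioned above, would extend this pattern by calling a helper like `freeze()` on successive layers as training proceeds rather than fixing the frozen set once up front.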