Frozen Feature

Frozen feature methods leverage pre-trained feature extractors, whose weights remain fixed during subsequent training, to improve efficiency and performance across machine learning tasks. Current research focuses on making the best use of frozen features in settings such as few-shot learning, continual learning, and federated learning, often through techniques like data augmentation in feature space, adaptive distance metrics, and novel classifier architectures (e.g., individual per-task classifiers or prototypical networks). Because only a lightweight head is trained, this approach substantially reduces computational and memory cost, making it well suited to resource-constrained environments and large-scale applications; it also helps mitigate catastrophic forgetting in continual learning and client drift in federated learning.
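As a minimal sketch of one of the techniques mentioned above, the snippet below implements a prototypical (nearest-class-mean) classifier over features assumed to come from a frozen extractor. The function names and the use of NumPy arrays are illustrative assumptions, not a specific paper's API; in practice the feature matrix would be produced by a fixed pre-trained backbone.

```python
import numpy as np

def class_prototypes(features, labels):
    """Average the frozen features of each class into one prototype vector.

    features: (n_samples, dim) array from a frozen extractor (assumed given).
    labels:   (n_samples,) integer class labels.
    """
    classes = np.unique(labels)
    prototypes = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, prototypes

def predict(features, classes, prototypes):
    """Assign each sample to the class of its nearest prototype (Euclidean)."""
    # Pairwise distances: (n_samples, n_classes)
    dists = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]

# Toy usage with hypothetical 2-D "frozen" features for two classes
train_feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]])
train_labels = np.array([0, 0, 1, 1])
classes, protos = class_prototypes(train_feats, train_labels)
preds = predict(np.array([[0.05, 0.05], [5.0, 5.0]]), classes, protos)
```

Since only class means are stored and compared, no gradient updates touch the extractor, which is what makes this style of classifier attractive for few-shot and continual-learning settings.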

Papers