Continual Test-Time Adaptation
Continual Test-Time Adaptation (CTTA) focuses on adapting pre-trained models to continuously changing, unlabeled data streams during deployment, with the goal of keeping models robust as the test distribution drifts in real-world conditions. Current research emphasizes efficient adaptation strategies, often employing self-training with pseudo-labels, meta-learning, and selective parameter updates within Mean Teacher frameworks, or steering adaptation with visual prompts. This field is crucial for deploying machine learning models in dynamic environments, particularly in applications such as autonomous driving and medical image analysis, where data distributions shift over time.
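To make the Mean Teacher / pseudo-label idea concrete, below is a minimal sketch of teacher-student test-time adaptation on an unlabeled stream: the frozen-gradient teacher produces soft pseudo-labels, the student is updated to match them, and the teacher then tracks the student via an exponential moving average. This is an illustrative simplification, not the implementation of any specific paper; the model, function names, and hyperparameters (e.g. `adapt_on_stream`, the learning rate, the EMA momentum) are assumptions for the example.

```python
# Minimal Mean Teacher-style continual test-time adaptation sketch (PyTorch).
# Assumes a generic classifier and a stream of unlabeled test batches.
import copy
import torch
from torchvision.models import resnet18


@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Move teacher weights toward the student via an exponential moving average."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)


def adapt_on_stream(student, test_stream, lr=1e-4, momentum=0.999):
    """Adapt `student` online on unlabeled batches using teacher pseudo-labels."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)  # teacher is never updated by gradients

    optimizer = torch.optim.SGD(student.parameters(), lr=lr)
    predictions = []

    for x in test_stream:  # x: one unlabeled batch from the shifting test stream
        with torch.no_grad():
            pseudo_probs = teacher(x).softmax(dim=1)  # soft pseudo-labels

        student_logits = student(x)
        # Cross-entropy between student predictions and the teacher's soft targets
        loss = -(pseudo_probs * student_logits.log_softmax(dim=1)).sum(dim=1).mean()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        ema_update(teacher, student, momentum)  # teacher slowly tracks the student
        predictions.append(student_logits.argmax(dim=1))

    return predictions


if __name__ == "__main__":
    model = resnet18(num_classes=10)
    # Dummy generator standing in for a continuously shifting, unlabeled test stream.
    stream = (torch.randn(8, 3, 224, 224) for _ in range(3))
    preds = adapt_on_stream(model, stream)
    print([p.tolist() for p in preds])
```

Methods in the papers below differ mainly in what they update (e.g. only prompts or selected parameters) and how they stabilize the teacher over long, recurring shifts, but the adapt-then-EMA loop above is the common backbone.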
Papers
Each Test Image Deserves A Specific Prompt: Continual Test-Time Adaptation for 2D Medical Image Segmentation
Ziyang Chen, Yongsheng Pan, Yiwen Ye, Mengkang Lu, Yong Xia
Beyond Entropy: Style Transfer Guided Single Image Continual Test-Time Adaptation
Younggeol Cho, Youngrae Kim, Dongman Lee
Persistent Test-time Adaptation in Recurring Testing Scenarios
Trung-Hieu Hoang, Duc Minh Vo, Minh N. Do