Test-Time Adaptation
Test-time adaptation (TTA) focuses on improving the performance of pre-trained machine learning models on unseen data during inference, without requiring additional labeled training data. Current research emphasizes developing robust TTA methods across diverse tasks, including image classification, segmentation, object detection, and speech recognition, often employing techniques such as batch-normalization statistics updates, pseudo-labeling, and adversarial training within various model architectures (e.g., transformers and neural implicit representations). The ability to adapt models efficiently at test time is crucial for deploying machine learning systems in real-world scenarios characterized by domain shifts and data variability, impacting fields ranging from medical imaging to robotics.
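As a concrete illustration of the batch-normalization and pseudo-labeling style of TTA mentioned above, the sketch below adapts a PyTorch classifier at test time by normalizing with the incoming test batch's statistics and minimizing prediction entropy (a soft form of pseudo-labeling, in the spirit of methods such as Tent). This is a minimal, assumption-laden example: the model, learning rate, and test batch are placeholders and are not taken from any of the papers listed below.

```python
# Minimal TTA sketch: adapt batch-norm layers on unlabeled test data.
# All hyperparameters and the backbone are illustrative placeholders.
import torch
import torch.nn as nn
import torchvision.models as models


def configure_model_for_tta(model: nn.Module):
    """Freeze all weights except BatchNorm affine parameters and make BN layers
    normalize with the current test batch instead of stored source statistics."""
    model.requires_grad_(False)
    model.train()  # BN layers compute batch statistics in train mode
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.requires_grad_(True)
            # Discard source-domain running statistics; use test-batch statistics.
            m.track_running_stats = False
            m.running_mean = None
            m.running_var = None
            params += [m.weight, m.bias]
    return params


def entropy_loss(logits: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy of the softmax predictions (lower = more confident)."""
    probs = logits.softmax(dim=1)
    return -(probs * probs.log()).sum(dim=1).mean()


@torch.enable_grad()
def adapt_on_batch(model, optimizer, x):
    """One adaptation step: forward pass, entropy loss, update BN parameters."""
    logits = model(x)
    loss = entropy_loss(logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.detach()


if __name__ == "__main__":
    model = models.resnet18(weights=None)           # stand-in for a pre-trained model
    params = configure_model_for_tta(model)
    optimizer = torch.optim.SGD(params, lr=1e-3)    # illustrative hyperparameter
    test_batch = torch.randn(32, 3, 224, 224)       # stand-in for shifted test data
    preds = adapt_on_batch(model, optimizer, test_batch).argmax(dim=1)
    print(preds[:5])
```

Restricting updates to the batch-norm affine parameters keeps adaptation cheap and reduces the risk of catastrophic forgetting; the papers below explore more elaborate variants of this basic recipe, such as sample partitioning, anti-forgetting regularization, and uncertainty-aware prototyping.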
Papers
Exploring Structured Semantic Priors Underlying Diffusion Score for Test-time Adaptation
Mingjia Li, Shuang Li, Tongrui Su, Longhui Yuan, Jian Liang, Wei Li
SPARNet: Continual Test-Time Adaptation via Sample Partitioning Strategy and Anti-Forgetting Regularization
Xinru Meng, Han Sun, Jiamei Liu, Ningzhong Liu, Huiyu Zhou
Augmented Contrastive Clustering with Uncertainty-Aware Prototyping for Time Series Test Time Adaptation
Peiliang Gong, Mohamed Ragab, Min Wu, Zhenghua Chen, Yongyi Su, Xiaoli Li, Daoqiang Zhang