Test-Time Adaptation
Test-time adaptation (TTA) focuses on improving the performance of pre-trained machine learning models on unseen data during inference, relying only on the unlabeled test data rather than additional labeled training data. Current research emphasizes developing robust TTA methods across diverse tasks, including image classification, segmentation, object detection, and speech recognition, often employing techniques such as batch normalization updates, pseudo-labeling, and adversarial training across a range of model architectures (e.g., transformers, neural implicit representations). The ability to adapt models efficiently at test time is crucial for deploying machine learning systems in real-world scenarios characterized by domain shifts and data variability, with applications ranging from medical imaging to robotics.
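To make one of these techniques concrete, the sketch below shows a minimal test-time adaptation loop in PyTorch that normalizes each unlabeled test batch with its own statistics and tunes only the batch-norm affine parameters by minimizing prediction entropy, in the spirit of Tent-style methods. It is an illustrative sketch, not a reproduction of any paper listed here; the toy CNN, the helper names configure_model and adapt_batch, and all hyperparameters are assumptions for the example.

```python
# Minimal sketch of test-time adaptation via batch-norm updates and entropy
# minimization. Assumes a pre-trained PyTorch classifier with BatchNorm layers;
# the model, function names, and hyperparameters below are illustrative only.
import torch
import torch.nn as nn


def configure_model(model: nn.Module) -> list:
    """Freeze all weights; keep only BatchNorm affine parameters trainable and
    make BN normalize with current-batch statistics instead of stored ones."""
    model.train()  # BN uses batch statistics in train mode
    for p in model.parameters():
        p.requires_grad_(False)
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.requires_grad_(True)
            m.track_running_stats = False      # always use test-batch statistics
            m.running_mean, m.running_var = None, None
            params += [m.weight, m.bias]
    return params


@torch.enable_grad()
def adapt_batch(model: nn.Module, x: torch.Tensor,
                optimizer: torch.optim.Optimizer) -> torch.Tensor:
    """One adaptation step on an unlabeled test batch: minimize the Shannon
    entropy of the predictions, then return the adapted class predictions."""
    logits = model(x)
    probs = logits.softmax(dim=1)
    entropy = -(probs * logits.log_softmax(dim=1)).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return model(x).argmax(dim=1)


if __name__ == "__main__":
    # Toy CNN standing in for a pre-trained source model.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
    )
    params = configure_model(model)
    optimizer = torch.optim.SGD(params, lr=1e-3, momentum=0.9)
    test_batch = torch.randn(32, 3, 32, 32)  # unlabeled test-time data
    preds = adapt_batch(model, test_batch, optimizer)
    print(preds.shape)  # torch.Size([32])
```

Restricting updates to the normalization layers keeps adaptation cheap and reduces the risk of the model drifting away from its source training when test batches are small or noisy, which is why many TTA methods adopt this design.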
Papers
Test-Time Adaptation with SaLIP: A Cascade of SAM and CLIP for Zero shot Medical Image Segmentation
Sidra Aleem, Fangyijie Wang, Mayug Maniparambil, Eric Arazo, Julia Dietlmeier, Guenole Silvestre, Kathleen Curran, Noel E. O'Connor, Suzanne Little
Unified Entropy Optimization for Open-Set Test-Time Adaptation
Zhengqing Gao, Xu-Yao Zhang, Cheng-Lin Liu
Backpropagation-free Network for 3D Test-time Adaptation
Yanshuo Wang, Ali Cheraghian, Zeeshan Hayder, Jie Hong, Sameera Ramasinghe, Shafin Rahman, David Ahmedt-Aristizabal, Xuesong Li, Lars Petersson, Mehrtash Harandi
Efficient Test-Time Adaptation of Vision-Language Models
Adilbek Karmanov, Dayan Guan, Shijian Lu, Abdulmotaleb El Saddik, Eric Xing