Test-Time Adaptation
Test-time adaptation (TTA) focuses on improving the performance of pre-trained machine learning models on unseen data during inference, without requiring additional labeled training data. Current research emphasizes developing robust TTA methods across diverse tasks, including image classification, segmentation, object detection, and speech recognition, often employing techniques such as updating batch-normalization statistics, pseudo-labeling, and adversarial training within various model architectures (e.g., transformers, neural implicit representations). The ability to adapt models efficiently at test time is crucial for deploying machine learning systems in real-world scenarios characterized by domain shifts and data variability, impacting fields ranging from medical imaging to robotics.
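To make one of the techniques named above concrete, the following is a minimal PyTorch sketch of a common TTA recipe: freezing the pre-trained network, letting batch-normalization layers use current test-batch statistics, and updating only their affine parameters by minimizing prediction entropy on unlabeled test batches. The model, data loader, and hyperparameters are placeholders, and this is an illustrative sketch of the general approach rather than the exact method of any paper listed below.

```python
import torch
import torch.nn as nn


def configure_for_tta(model: nn.Module):
    """Freeze all weights except BatchNorm affine parameters and
    switch BN layers to use statistics of the current test batch."""
    model.requires_grad_(False)
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.requires_grad_(True)
            m.track_running_stats = False      # do not rely on source-domain stats
            m.running_mean, m.running_var = None, None  # force test-batch statistics
            params += [m.weight, m.bias]
    return params


@torch.enable_grad()
def adapt_and_predict(model, x, optimizer):
    """One adaptation step: minimize prediction entropy on the unlabeled
    test batch, then return predictions from the adapted model."""
    log_probs = model(x).log_softmax(dim=1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return model(x).argmax(dim=1)


# Usage sketch (model and test_loader are assumed to exist):
# params = configure_for_tta(model)
# optimizer = torch.optim.SGD(params, lr=1e-3, momentum=0.9)
# model.train()  # keep BN layers in batch-statistics mode
# for x, _ in test_loader:
#     preds = adapt_and_predict(model, x, optimizer)
```

Restricting updates to the batch-normalization affine parameters keeps adaptation lightweight and reduces the risk of catastrophically overwriting the pre-trained weights, which is why many TTA methods adopt this parameterization.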
Papers
Learning to Adapt to Online Streams with Distribution Shifts
Chenyan Wu, Yimu Pan, Yandong Li, James Z. Wang
Do Machine Learning Models Learn Statistical Rules Inferred from Data?
Aaditya Naik, Yinjun Wu, Mayur Naik, Eric Wong
Neuro-Modulated Hebbian Learning for Fully Test-Time Adaptation
Yushun Tang, Ce Zhang, Heng Xu, Shuoshuo Chen, Jie Cheng, Luziwei Leng, Qinghai Guo, Zhihai He