Test-Time Training
Test-time training (TTT) is a machine learning paradigm that adapts a pre-trained model to unseen data distributions during inference, improving robustness to domain shift. Current research focuses on designing effective self-supervised learning objectives for TTT, integrating them into architectures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), graph neural networks (GNNs), and transformers, and exploring techniques like contrastive learning, knowledge distillation, and normalizing flows to strengthen adaptation. Because training and test distributions inevitably differ in practice, TTT holds significant promise for more reliable and generalizable models in applications such as medical image analysis, recommendation systems, and time series forecasting.
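To make the idea concrete, here is a minimal, dependency-free sketch of test-time adaptation with a self-supervised objective: a tiny binary classifier takes a few gradient steps on an unlabeled test batch to minimize its own prediction entropy (one common choice of self-supervised signal; the model, data, and learning rate below are illustrative assumptions, not taken from any of the papers listed here).

```python
import math

# Illustrative sketch of entropy-minimization test-time adaptation.
# All numbers (weights, batch, learning rate) are made up for the demo.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def entropy(p, eps=1e-12):
    """Binary prediction entropy -- the self-supervised objective."""
    return -(p * math.log(p + eps) + (1.0 - p) * math.log(1.0 - p + eps))

def mean_entropy(w, b, batch):
    """Average prediction entropy over an unlabeled test batch."""
    return sum(entropy(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b))
               for x in batch) / len(batch)

def adapt_step(w, b, batch, lr=0.5):
    """One gradient step that lowers prediction entropy (no labels used)."""
    gw, gb, n = [0.0] * len(w), 0.0, len(batch)
    for x in batch:
        z = sigmoid_in = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = sigmoid(z)
        dz = -z * p * (1.0 - p)  # dH/dz for a sigmoid output unit
        for i, xi in enumerate(x):
            gw[i] += dz * xi / n
        gb += dz / n
    return [wi - lr * gi for wi, gi in zip(w, gw)], b - lr * gb

# A "pre-trained" (here: hand-set) classifier meets a shifted test batch.
w, b = [0.4, -0.3], 0.1
test_batch = [[0.5, 0.2], [0.1, 0.6], [-0.3, 0.4], [0.2, -0.1]]

before = mean_entropy(w, b, test_batch)
for _ in range(25):
    w, b = adapt_step(w, b, test_batch)
after = mean_entropy(w, b, test_batch)
print(after < before)  # adaptation makes predictions more confident
```

In practice the same loop runs inside a deep-learning framework (the gradients come from autograd rather than the hand-derived formula above), and methods differ mainly in which parameters they update and which self-supervised loss they minimize.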
Papers
Learning to (Learn at Test Time): RNNs with Expressive Hidden States
Yu Sun, Xinhao Li, Karan Dalal, Jiarui Xu, Arjun Vikram, Genghan Zhang, Yann Dubois, Xinlei Chen, Xiaolong Wang, Sanmi Koyejo, Tatsunori Hashimoto, Carlos Guestrin
Graph-Guided Test-Time Adaptation for Glaucoma Diagnosis using Fundus Photography
Qian Zeng, Le Zhang, Yipeng Liu, Ce Zhu, Fan Zhang