Video Domain Adaptation
Video domain adaptation focuses on training video analysis models that generalize across different video datasets, overcoming the limitations of models trained on a single, often narrow, source domain. Current research emphasizes unsupervised methods, leveraging contrastive learning, self-supervised pre-training, and transformer architectures (including Vision Transformers) to align features between the source and target domains, often with explicit modeling of temporal dynamics. This work is crucial for improving the robustness and real-world applicability of video understanding systems, since it reduces reliance on extensive, costly annotation of diverse video data.
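As a concrete illustration of the feature-alignment idea described above, the minimal PyTorch sketch below combines supervised classification on labeled source clips with a self-supervised contrastive (InfoNCE) loss that pulls together two augmented views of the same unlabeled target clip. All module names, dimensions, and the loss weighting here are illustrative assumptions, not the method of any paper listed below.

```python
# Minimal sketch of unsupervised video domain adaptation via contrastive
# feature alignment. Names, shapes, and hyperparameters are assumptions
# for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClipEncoder(nn.Module):
    """Encodes a clip of per-frame features into a single embedding
    via temporal average pooling followed by a projection MLP."""
    def __init__(self, frame_dim=512, embed_dim=128):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(frame_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, frames):            # frames: (batch, time, frame_dim)
        pooled = frames.mean(dim=1)       # temporal average pooling
        return F.normalize(self.proj(pooled), dim=-1)


def info_nce(z_a, z_b, temperature=0.07):
    """InfoNCE: matching indices across the two views are positives;
    every other pair in the batch serves as a negative."""
    logits = z_a @ z_b.t() / temperature  # (batch, batch) cosine similarities
    targets = torch.arange(z_a.size(0))
    return F.cross_entropy(logits, targets)


# Toy training step on random tensors standing in for frame features.
encoder = ClipEncoder()
classifier = nn.Linear(128, 10)           # assumed: 10 source action classes

src_clips = torch.randn(8, 16, 512)       # labeled source clips
src_labels = torch.randint(0, 10, (8,))
tgt_view_a = torch.randn(8, 16, 512)      # two augmented views of the same
tgt_view_b = torch.randn(8, 16, 512)      # unlabeled target clips

cls_loss = F.cross_entropy(classifier(encoder(src_clips)), src_labels)
con_loss = info_nce(encoder(tgt_view_a), encoder(tgt_view_b))
loss = cls_loss + 0.5 * con_loss          # 0.5 is an assumed trade-off weight
loss.backward()
print(f"total loss: {loss.item():.3f}")
```

Because the target branch needs no labels, the contrastive term can regularize the shared encoder toward features that transfer across domains; the weighting between the two losses is a tunable design choice.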
Papers
Confidence Attention and Generalization Enhanced Distillation for Continuous Video Domain Adaptation
Xiyu Wang, Yuecong Xu, Jianfei Yang, Bihan Wen, Alex C. Kot
Augmenting and Aligning Snippets for Few-Shot Video Domain Adaptation
Yuecong Xu, Jianfei Yang, Yunjiao Zhou, Zhenghua Chen, Min Wu, Xiaoli Li