Domain Adaptation
Domain adaptation in machine learning focuses on improving the performance of models trained on one dataset (source domain) when applied to a different, but related, dataset (target domain). Current research emphasizes techniques like adversarial training, transfer learning, and the use of pre-trained large language models (LLMs) and foundation models, often incorporating specialized architectures such as transformers and graph neural networks to bridge domain gaps. This field is crucial for addressing the limitations of models trained on limited or biased data, enabling broader applicability across diverse real-world applications, including medical imaging, natural language processing, and remote sensing.
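As a concrete illustration of bridging a domain gap (this example is not drawn from the papers listed below), one classic statistical technique is CORAL (CORrelation ALignment), which transforms source-domain features so their second-order statistics match the target domain. The sketch below assumes simple NumPy feature matrices; shifting to the target mean is a common variant, not part of the original formulation.

```python
import numpy as np

def coral(source, target, eps=1e-5):
    """CORAL-style feature alignment: recolor source features so their
    covariance matches the target domain's covariance.

    source, target: (n_samples, n_features) arrays from the two domains.
    eps: small ridge term for numerical stability of the Cholesky factors.
    """
    d = source.shape[1]
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    # Regularized empirical covariances of each domain
    cov_s = np.cov(src_c, rowvar=False) + eps * np.eye(d)
    cov_t = np.cov(tgt_c, rowvar=False) + eps * np.eye(d)
    ls = np.linalg.cholesky(cov_s)   # cov_s = ls @ ls.T
    lt = np.linalg.cholesky(cov_t)   # cov_t = lt @ lt.T
    # Whiten source features (remove source correlations), then
    # re-color them with the target covariance structure.
    align = np.linalg.inv(ls).T @ lt.T
    # Shifting to the target mean is an optional variant used here.
    return src_c @ align + target.mean(axis=0)
```

After the transform, a classifier trained on the aligned source features sees inputs whose covariance (approximately) matches the target distribution, which is the sense in which the domain gap is "bridged" by this family of methods.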
Papers
Geospatial foundation models for image analysis: evaluating and enhancing NASA-IBM Prithvi's domain adaptability
Chia-Yu Hsu, Wenwen Li, Sizhe Wang
From Prediction to Application: Language Model-based Code Knowledge Tracing with Domain Adaptive Pre-Training and Automatic Feedback System with Pedagogical Prompting for Comprehensive Programming Education
Unggi Lee, Jiyeong Bae, Yeonji Jung, Minji Kang, Gyuri Byun, Yeonseo Lee, Dohee Kim, Sookbun Lee, Jaekwon Park, Taekyung Ahn, Gunho Lee, Hyeoncheol Kim