Trained Source Model

Trained source models are leveraged in source-free domain adaptation to improve performance on target datasets without requiring access to the source data, addressing privacy and efficiency concerns. Current research refines pseudo-labeling and mitigates catastrophic forgetting through techniques such as contrastive learning, teacher-student frameworks, and uncertainty-based filtering, often within architectures such as YOLO for object detection or LLMs for text style transfer. These advances enable efficient, privacy-preserving adaptation in applications including medical imaging, object detection in dynamic environments, and text style transfer. Developing robust and generalizable source-free adaptation methods remains a key open problem.
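The teacher-student and uncertainty-based ideas above can be illustrated with a minimal sketch: a pretrained source classifier is adapted to unlabeled target data using confidence-filtered pseudo-labels from an EMA teacher, never touching the source data. This is a toy NumPy illustration under assumed hyperparameters (confidence threshold, EMA rate, toy 2-D data), not any specific paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# "Trained source model": linear softmax weights assumed fit on source data
# (hypothetical 2-feature, 2-class setup; the source data itself is not needed).
W_src = np.array([[2.0, -2.0],
                  [-2.0, 2.0]])  # shape: (features, classes)

# Unlabeled target domain: same class structure, shifted feature distribution.
X_tgt = np.vstack([rng.normal([1.5, 0.0], 0.3, (50, 2)),
                   rng.normal([0.0, 1.5], 0.3, (50, 2))])
y_tgt = np.array([0] * 50 + [1] * 50)  # held out, used only for evaluation

student = W_src.copy()
teacher = W_src.copy()
lr, ema, conf_thresh = 0.1, 0.99, 0.9  # assumed hyperparameters

for step in range(100):
    probs_t = softmax(X_tgt @ teacher)   # teacher generates pseudo-labels
    conf = probs_t.max(axis=1)
    mask = conf > conf_thresh            # uncertainty-based filtering
    if mask.any():
        pseudo = probs_t[mask].argmax(axis=1)
        probs_s = softmax(X_tgt[mask] @ student)
        onehot = np.eye(2)[pseudo]
        # Cross-entropy gradient on confident pseudo-labeled samples only.
        grad = X_tgt[mask].T @ (probs_s - onehot) / mask.sum()
        student -= lr * grad
    # Slow EMA teacher update stabilizes pseudo-labels and resists
    # catastrophic forgetting of the source decision boundary.
    teacher = ema * teacher + (1 - ema) * student

acc = (softmax(X_tgt @ student).argmax(axis=1) == y_tgt).mean()
print(round(acc, 2))
```

The EMA teacher changes slowly, so early noisy student updates cannot immediately corrupt the pseudo-label source; the confidence threshold discards ambiguous target samples, the same role uncertainty estimates play in the methods surveyed above.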

Papers