Weakly Supervised Domain Adaptation
Weakly supervised domain adaptation (WDA) addresses the problem of transferring knowledge from a source domain with noisy or incomplete labels to a target domain that lacks annotations. Current research focuses on robust algorithms that exploit the limited supervision available, commonly through self-training, iterative refinement of pseudo-labels, and multi-task learning frameworks that improve generalization to the target domain. These methods matter for real-world settings where fully labeled data is scarce or expensive to obtain, and they enable more efficient and effective model training in fields such as medical image analysis and 3D object detection.
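To make the self-training idea concrete, the sketch below shows a minimal confidence-thresholded pseudo-labeling loop: a classifier is first fit on weakly labeled source data, then repeatedly assigns pseudo-labels to confident target samples and retrains on the combined set. The architecture, confidence threshold, and toy data are illustrative assumptions, not the method of any particular paper.

```python
# Minimal sketch of confidence-thresholded self-training for weakly supervised
# domain adaptation. Model, threshold, and toy data are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data: labeled (possibly noisy) source domain, unlabeled target domain.
n_src, n_tgt, dim, n_classes = 200, 200, 16, 3
x_src = torch.randn(n_src, dim)
y_src = torch.randint(0, n_classes, (n_src,))   # noisy source labels
x_tgt = torch.randn(n_tgt, dim) + 0.5           # shifted target domain, no labels

model = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, n_classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)


def train(x, y, epochs=50):
    """Standard supervised training on whatever labels are currently available."""
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        opt.step()


# Round 1: fit on the weakly labeled source domain only.
train(x_src, y_src)

# Iterative refinement: pseudo-label confident target samples, then retrain
# on the union of source labels and accepted pseudo-labels.
for round_ in range(3):
    with torch.no_grad():
        probs = F.softmax(model(x_tgt), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > 0.8                       # confidence threshold (assumed value)
    x_mix = torch.cat([x_src, x_tgt[keep]])
    y_mix = torch.cat([y_src, pseudo[keep]])
    train(x_mix, y_mix)
    print(f"round {round_}: kept {int(keep.sum())} pseudo-labeled target samples")
```

In practice the threshold is often annealed across rounds and the noisy source labels may be down-weighted, but the loop structure above captures the core self-training and pseudo-label refinement pattern described in the overview.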