Domain Divergence
Domain divergence, the difference in data distributions across domains (e.g., datasets or environments), hinders the generalization ability of machine learning models. Current research focuses on mitigating this divergence through techniques such as adversarial training, contrastive learning, and the use of large vision-language models like CLIP to measure and reduce domain discrepancies. These methods aim to learn domain-invariant features or to calibrate predictions so that models perform well across diverse datasets, improving robustness in fields such as image recognition, natural language processing, and medical image analysis.
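As a rough illustration of what measuring and reducing domain discrepancy can look like in practice, the sketch below estimates the divergence between two feature batches with a Gaussian-kernel maximum mean discrepancy (MMD) and shows a gradient-reversal layer of the kind used for adversarial, domain-invariant feature learning. This is a minimal PyTorch sketch under illustrative assumptions, not the method of any specific paper; the feature dimensions, kernel bandwidth, and network sizes are placeholders.

```python
# Illustrative sketch: (1) a Gaussian-kernel MMD^2 estimate between feature
# batches from two domains, and (2) a gradient-reversal layer so a domain
# classifier can be trained adversarially against the feature extractor.
# All shapes and hyperparameters below are assumptions, not from the source.
import torch
import torch.nn as nn


def gaussian_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Plug-in MMD^2 estimate between feature batches x and y."""
    def kernel(a, b):
        # Pairwise squared Euclidean distances, then a Gaussian kernel.
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))

    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
    return GradReverse.apply(x, lambd)


if __name__ == "__main__":
    # Toy source/target feature batches standing in for two domains.
    source_feats = torch.randn(64, 256)
    target_feats = torch.randn(64, 256) + 0.5  # shifted distribution

    print("MMD^2 estimate:", gaussian_mmd(source_feats, target_feats).item())

    # A domain classifier trained through the reversed gradient pushes the
    # upstream feature extractor toward domain-invariant representations.
    domain_head = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))
    domain_logits = domain_head(grad_reverse(source_feats, lambd=0.1))
```

In this kind of setup, the MMD term (or the domain classifier's reversed gradient) is typically added to the task loss as a regularizer, so reducing the measured divergence and learning the main task happen jointly.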