Conditional Contrastive Learning
Conditional contrastive learning refines standard contrastive learning by incorporating conditioning information (e.g., auxiliary labels or sensitive attributes) into training, with the aim of learning more robust and informative representations. Current research leverages this conditioning signal for several purposes: improving performance under noisy labels or limited data, mitigating bias in downstream applications, and strengthening weakly supervised learning. The approach shows promise across diverse fields, including medical image analysis, natural language processing, and fair machine learning, by supporting the development of more accurate, equitable, and efficient models.
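One common way to incorporate conditioning information, used in several of the works this line of research builds on, is conditional negative sampling: for each anchor, negatives are drawn only from samples that share the anchor's conditioning value, so the contrastive objective cannot exploit the conditioning variable itself. The sketch below is a minimal NumPy illustration of this idea under that assumption; the function name `conditional_info_nce` and all parameters are illustrative, not from any specific paper's code.

```python
import numpy as np


def conditional_info_nce(anchors, positives, conditions, temperature=0.1):
    """Illustrative InfoNCE loss with conditional negative sampling.

    For each anchor i, the candidate set consists of its positive plus
    all other positives whose conditioning value matches conditions[i],
    which discourages the representation from encoding the condition.
    """
    # Normalize embeddings so dot products are cosine similarities.
    anchors = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    positives = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    sims = (anchors @ positives.T) / temperature  # pairwise similarity logits

    losses = []
    for i, c in enumerate(conditions):
        mask = conditions == c              # restrict candidates to same condition
        logits = sims[i, mask]
        # Position of the true positive within the masked candidate set.
        pos_idx = int(np.flatnonzero(np.flatnonzero(mask) == i)[0])
        # Numerically stable log-softmax of the positive's logit.
        m = logits.max()
        log_prob = logits[pos_idx] - m - np.log(np.exp(logits - m).sum())
        losses.append(-log_prob)
    return float(np.mean(losses))
```

Because each anchor is only contrasted against same-condition negatives, the loss cannot be reduced by separating embeddings along the conditioning variable, which is the core mechanism behind bias-mitigation uses of this family of methods.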