Counterfactual Contrastive Learning
Counterfactual contrastive learning aims to improve the robustness and generalization of machine learning models by generating synthetic data that represents realistic variations of a dataset's samples. Current research focuses on enhancing contrastive learning methods with causal image synthesis or large language models that create "counterfactual" positive pairs for training, improving performance on downstream tasks, especially in settings with limited data or significant domain shift. The approach shows promise in applications such as medical image analysis and natural language processing, where it mitigates biases and improves model performance across subgroups and datasets.
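To make the idea concrete, here is a minimal sketch of the training signal described above: an InfoNCE-style contrastive loss in which each anchor's positive is a counterfactual view of the same sample. The `counterfactual_view` function is a hypothetical stand-in for a causal synthesis model (e.g. simulating an acquisition-domain shift while preserving content); the function names and parameters are illustrative assumptions, not from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def counterfactual_view(x, shift_scale=0.5):
    # Hypothetical stand-in for a causal synthesis model: apply a
    # domain-style shift (e.g. a scanner change) that preserves content.
    shift = shift_scale * rng.standard_normal(x.shape[-1])
    return x + shift

def info_nce(anchors, positives, temperature=0.1):
    # Normalize embeddings so similarity is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # every anchor vs. every candidate positive
    # Log-softmax over candidates; the matching counterfactual is the positive.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(a))
    return -log_probs[idx, idx].mean()

embeddings = rng.standard_normal((8, 16))          # anchor embeddings
counterfactuals = counterfactual_view(embeddings)  # counterfactual positives
loss = info_nce(embeddings, counterfactuals)
print(float(loss))
```

Minimizing this loss pulls each embedding toward its counterfactual view and away from other samples, which is what encourages invariance to the synthesized (e.g. domain-shift) factor.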
Papers
September 16, 2024
June 3, 2024
March 14, 2024
February 20, 2024