Contrastive Transformer

Contrastive Transformer models combine transformer architectures with contrastive learning to learn robust, discriminative representations. Current research applies this approach across diverse domains, including image processing (e.g., radiance field reconstruction, object detection, and change captioning), natural language processing (e.g., text-based person search and biomedical information retrieval), and molecular property prediction. The methodology addresses challenges such as data scarcity, class imbalance, and cross-modal inconsistency, improving performance in these applications and advancing the state of the art in several fields.
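The contrastive objective common to most of these models pulls embeddings of positive pairs together while pushing apart in-batch negatives. A minimal NumPy sketch of a symmetric InfoNCE-style loss is shown below; the function name, batch construction, and temperature value are illustrative assumptions, not taken from any specific paper:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Symmetric InfoNCE loss over two batches of embeddings.

    z1, z2: (batch, dim) arrays; row i of z1 and row i of z2 form a
    positive pair, and all other rows act as in-batch negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (batch, batch) similarity matrix

    def xent(l):
        # Cross-entropy with the diagonal (the true pair) as the target.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_prob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_prob))

    # Average both directions: view-1 -> view-2 and view-2 -> view-1.
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
aligned = info_nce(z, z)        # identical views: positives dominate
mismatched = info_nce(z, z[::-1])  # shuffled pairs: loss should rise
print(aligned < mismatched)
```

In a Contrastive Transformer, `z1` and `z2` would be transformer encoder outputs for two views of the same item (e.g., two image augmentations, or an image and its paired text), rather than random vectors as in this toy example.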

Papers