Contrastive Transformer
Contrastive Transformer models combine transformer architectures with contrastive learning to learn robust, discriminative representations. Current research applies this approach across diverse domains, including image processing (e.g., radiance field reconstruction, object detection, and change captioning), natural language processing (e.g., text-based person search and biomedical information retrieval), and molecular property prediction. The methodology addresses challenges such as data scarcity, class imbalance, and cross-modal inconsistency, improving performance and advancing the state of the art in several fields.
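The core training signal in these models is a contrastive objective over paired embeddings, for example two views of the same sample encoded by a shared transformer. A minimal sketch of one common choice, the symmetric InfoNCE loss, is shown below in NumPy; the function name, batch layout, and temperature value are illustrative assumptions, not taken from any specific paper above.

```python
import numpy as np

def log_softmax(x):
    """Numerically stable row-wise log-softmax."""
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def info_nce_loss(z_a, z_b, temperature=0.07):
    """Symmetric InfoNCE loss over two batches of paired embeddings.

    z_a, z_b: (batch, dim) arrays. Row i of z_a and row i of z_b form a
    positive pair (e.g. two augmented views of one sample, or an image
    and its caption, each encoded by a transformer); every other row in
    the batch serves as a negative. Names and defaults are illustrative.
    """
    # L2-normalize so the dot product is a cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature  # (batch, batch) similarity matrix
    n = logits.shape[0]
    diag = np.arange(n)
    # Cross-entropy with the diagonal as the target, in both directions.
    loss_a = -log_softmax(logits)[diag, diag].mean()
    loss_b = -log_softmax(logits.T)[diag, diag].mean()
    return (loss_a + loss_b) / 2
```

Pulling positives together and pushing batch negatives apart is what yields the discriminative embeddings that the downstream tasks (retrieval, detection, property prediction) then exploit.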
Papers
Dated entries (titles not recovered in this copy): October 30, 2024; March 25, 2024; January 24, 2024; November 15, 2023; October 11, 2023; July 2, 2023; April 28, 2023; March 26, 2023; March 6, 2023.