Contrastive Example
Contrastive learning is a machine learning technique that learns representations by pulling similar data points (positive pairs) close together in embedding space while pushing dissimilar points (negative pairs) apart. Current research applies contrastive learning to diverse tasks, including image and text processing, anomaly detection, and multimodal learning, often building on frameworks such as CLIP and SimCLR or integrating contrastive objectives into existing models. This approach has proven valuable for improving the robustness, efficiency, and explainability of AI systems across numerous applications, from image super-resolution to fact verification and cross-lingual natural language processing.
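As a concrete illustration of the pull-together / push-apart objective described above, the sketch below implements a SimCLR-style NT-Xent (normalized temperature-scaled cross-entropy) loss in PyTorch. The function name, batch size, and embedding dimension are illustrative choices, not taken from any of the papers listed here: matching rows of two augmented views form positive pairs, and every other row in the batch serves as a negative.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent contrastive loss (illustrative sketch).

    z1, z2: (n, d) embeddings of two augmented views of the same batch;
    row i of z1 and row i of z2 form a positive pair, and all other
    rows in the combined batch act as negatives.
    """
    n = z1.size(0)
    # L2-normalize so dot products are cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2n, d)
    sim = z @ z.t() / temperature                        # (2n, 2n) similarities
    # Mask self-similarity so a sample never counts as its own negative.
    sim.fill_diagonal_(float("-inf"))
    # The positive for row i is row i+n, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Toy usage: random embeddings standing in for encoder outputs.
z1 = torch.randn(8, 128)
z2 = z1 + 0.1 * torch.randn(8, 128)  # perturbed views of the same items
print(nt_xent_loss(z1, z2).item())
```

The temperature controls how sharply the softmax concentrates on hard negatives; values around 0.1 to 0.5 are commonly reported for SimCLR-style training.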
Papers
Contrastive News and Social Media Linking using BERT for Articles and Tweets across Dual Platforms
Jan Piotrowski, Marek Wachnicki, Mateusz Perlik, Jakub Podolak, Grzegorz Rucki, Michał Brzozowski, Paweł Olejnik, Julian Kozłowski, Tomasz Nocoń, Jakub Kozieł, Stanisław Giziński, Piotr Sankowski
CONFORM: Contrast is All You Need For High-Fidelity Text-to-Image Diffusion Models
Tuna Han Salih Meral, Enis Simsar, Federico Tombari, Pinar Yanardag
Contrastive Learning-based Sentence Encoders Implicitly Weight Informative Words
Hiroto Kurita, Goro Kobayashi, Sho Yokoi, Kentaro Inui
CONTRASTE: Supervised Contrastive Pre-training With Aspect-based Prompts For Aspect Sentiment Triplet Extraction
Rajdeep Mukherjee, Nithish Kannen, Saurabh Kumar Pandey, Pawan Goyal