Contrastive Embeddings
Contrastive embeddings are learned representations trained to maximize similarity between semantically related data points while minimizing similarity between unrelated ones. Current research applies this technique to diverse areas, including deepfake detection (using models trained on large datasets of generated images), improving the explainability and source attribution of large language models, and refining existing embeddings for better performance on downstream tasks. The approach is proving valuable across fields ranging from recommendation systems and sign language recognition to the analysis of complex data such as transient motion in turbid media.
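As a rough sketch of the objective described above, one common instantiation is an InfoNCE-style loss: each anchor embedding is scored against a batch of candidate embeddings, and the loss rewards high similarity with the anchor's own positive pair and low similarity with everything else. The function and variable names below are illustrative, not taken from any specific paper in this list.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss: each anchor should be most similar
    to its own positive and dissimilar to every other positive."""
    # L2-normalize so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature          # (N, N) similarity matrix
    # Diagonal entries are the matching (positive) pairs;
    # softmax cross-entropy against the diagonal targets
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy check: correctly aligned pairs should score a lower loss
# than randomly shuffled (mismatched) pairs
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
aligned = info_nce_loss(emb, emb + 0.01 * rng.normal(size=emb.shape))
shuffled = info_nce_loss(emb, rng.permutation(emb))
```

The temperature parameter controls how sharply the loss concentrates on hard negatives; small values (e.g. 0.05 to 0.2) are typical in the contrastive-learning literature.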
Papers
- July 29, 2024
- July 6, 2024
- April 11, 2024
- February 19, 2024
- January 28, 2024
- August 18, 2023
- January 31, 2023
- April 4, 2022