Contrastive Model
Contrastive learning is a self-supervised machine learning approach that learns representations by comparing and contrasting data points, aiming to produce embeddings in which similar data cluster together and dissimilar data are pushed apart. Current research focuses on improving contrastive models across modalities (vision, language, audio) through techniques such as prompt tuning, in-context learning, and adversarial methods, often leveraging pretrained models for efficiency. These advances are impacting diverse fields: they enhance performance on tasks such as zero-shot classification and retrieval, and improve the safety and robustness of large language models and federated learning systems.
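To make the cluster-similar / separate-dissimilar idea concrete, below is a minimal sketch of an InfoNCE-style contrastive loss in PyTorch. The function name `info_nce_loss`, the temperature value, and the random example inputs are illustrative assumptions rather than the method of any particular paper; the core idea is cross-entropy over cosine similarities, which pulls each positive pair together and pushes all other (negative) pairs apart.

```python
# Minimal InfoNCE-style contrastive loss sketch (illustrative, not from a specific paper).
import torch
import torch.nn.functional as F


def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Contrastive loss over a batch of paired embeddings.

    z_a, z_b: (batch, dim) embeddings of two views of the same items.
    Row i of z_a and row i of z_b form the positive pair; every other
    row in the batch serves as a negative.
    """
    # L2-normalize so dot products become cosine similarities.
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)

    # Similarity matrix: entry (i, j) compares view-a item i with view-b item j.
    logits = z_a @ z_b.t() / temperature

    # The positive for row i lies on the diagonal, so the target class is i.
    targets = torch.arange(z_a.size(0), device=z_a.device)

    # Cross-entropy maximizes similarity for positives relative to negatives.
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Random embeddings standing in for two augmented views of the same batch.
    batch, dim = 8, 128
    z_a, z_b = torch.randn(batch, dim), torch.randn(batch, dim)
    print(info_nce_loss(z_a, z_b).item())
```

In practice the two views come from an encoder applied to augmented or multi-modal versions of the same item (e.g., two crops of an image, or an image and its caption), and the temperature controls how sharply negatives are penalized.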