Semantic Self-Supervision
Semantic self-supervision leverages the semantic relationships inherent in data to train models without explicit labels, with the goals of improving generalization and data efficiency. Current research focuses on integrating vision-language models, contrastive learning, and hybrid matching modules to exploit semantic information from sources such as text descriptions and automatically generated labels. The approach shows promise across applications including image classification, object detection, and zero-shot learning, particularly when labeled data is scarce or the number of classes is large.
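As a concrete illustration of the image-text contrastive objective this line of work builds on, below is a minimal sketch of a CLIP-style symmetric InfoNCE loss. The function name, temperature value, and embedding dimensions are illustrative assumptions, not taken from any specific paper listed here.

```python
import torch
import torch.nn.functional as F

def clip_style_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    image_emb, text_emb: (batch, dim) tensors from separate encoders.
    Pairs sharing a batch index are positives; all other pairs in the
    batch serve as negatives, so no explicit labels are required.
    """
    # L2-normalize so dot products are cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix, scaled by temperature.
    logits = image_emb @ text_emb.t() / temperature

    # Positives lie on the diagonal: image i matches description i.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the image-to-text and text-to-image directions.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

if __name__ == "__main__":
    # Stand-in embeddings; in practice these come from image/text encoders.
    imgs = torch.randn(8, 512)
    txts = torch.randn(8, 512)
    print(clip_style_contrastive_loss(imgs, txts))
```

Minimizing this loss pulls each image embedding toward the embedding of its own text description and away from the other descriptions in the batch, which is how semantic information from text supervises the vision model without class labels.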