Self-Supervised Paradigm
Self-supervised learning trains models on large amounts of unlabeled data by exploiting structure inherent in the data itself, reducing reliance on expensive human annotation. Current research focuses on improving the robustness and efficiency of self-supervised methods, exploring techniques such as contrastive learning, masked autoencoders, and the integration of foundation models for tasks including depth estimation, pose estimation, and active-learning initialization. These advances matter because they enable more generalizable, data-efficient models across computer vision applications, from medical image analysis to autonomous driving.
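As a concrete illustration of the contrastive branch of this family, here is a minimal sketch of a SimCLR-style NT-Xent loss in PyTorch. The function name, batch size, embedding dimension, and temperature are illustrative choices, not taken from any particular paper listed below.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss,
    the objective used in SimCLR-style contrastive learning.

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit-norm
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # exclude self-similarity
    # For row i, the positive example is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage: stand-in embeddings from an encoder applied to two augmentations.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```

The loss pulls the two views of each image together while pushing apart all other pairs in the batch, which is why contrastive methods can learn useful representations without any labels.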
Papers
Eleven related papers, published between December 15, 2021 and September 23, 2024.