One-Shot Segmentation
One-shot segmentation aims to segment images into different classes using only a single labeled example image per class, drastically reducing the annotation burden compared to traditional supervised learning. Current research focuses on leveraging powerful foundation models such as the Segment Anything Model (SAM), incorporating self-supervised learning to obtain robust representations from unlabeled data, and developing new strategies for prompt generation and feature matching to improve segmentation accuracy. The field matters because it promises to accelerate the development of segmentation models across domains, particularly medical imaging and robotics, where labeled data is scarce and expensive to acquire.
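To make the prompt-generation and feature-matching idea concrete, the sketch below shows one common recipe: extract a class prototype from the single labeled support image via masked average pooling, locate the best-matching region in the query image, and convert that location into a point prompt for a promptable segmenter such as SAM. This is a minimal illustration, not the method of any specific paper; the ResNet-50 backbone, the fixed 448x448 resize, and the helper names (`build_feature_extractor`, `similarity_map`, `peak_point_prompt`) are assumptions chosen for readability.

```python
"""Minimal one-shot prompt generation via feature matching (illustrative sketch).

Assumptions: torch and torchvision are installed; the support mask is a binary
(H, W) torch tensor aligned with the support image; the SAM call at the end is
shown only as a comment and uses the public `segment_anything` package.
"""
import torch
import torch.nn.functional as F
from torchvision import transforms
from torchvision.models import resnet50, ResNet50_Weights

# Resize the whole image (no cropping) so feature-grid coordinates map back
# to original pixels by simple scaling.
preprocess = transforms.Compose([
    transforms.Resize((448, 448)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def build_feature_extractor():
    # Truncate a pretrained ResNet-50 before global pooling so it returns a
    # spatial feature map of shape (B, 2048, H/32, W/32).
    backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
    return torch.nn.Sequential(*list(backbone.children())[:-2]).eval()

@torch.no_grad()
def similarity_map(extractor, support_img, support_mask, query_img):
    """Match a masked support prototype against every query-image location."""
    sup = extractor(preprocess(support_img).unsqueeze(0))   # (1, C, h, w)
    qry = extractor(preprocess(query_img).unsqueeze(0))     # (1, C, h, w)
    # Downsample the binary support mask to the feature resolution.
    mask = F.interpolate(support_mask[None, None].float(),
                         size=sup.shape[-2:], mode="nearest")
    # Masked average pooling -> a single class prototype vector (1, C).
    proto = (sup * mask).sum(dim=(2, 3)) / mask.sum().clamp(min=1e-6)
    # Cosine similarity between the prototype and each query feature.
    return F.cosine_similarity(qry, proto[..., None, None], dim=1).squeeze(0)

def peak_point_prompt(sim, query_hw):
    """Turn the best-matching feature location into an (x, y) pixel prompt."""
    h, w = sim.shape
    fy, fx = divmod(sim.flatten().argmax().item(), w)
    return (int((fx + 0.5) * query_hw[1] / w),
            int((fy + 0.5) * query_hw[0] / h))

# The resulting point can then be handed to a promptable segmenter, e.g. SAM:
#   from segment_anything import sam_model_registry, SamPredictor
#   predictor = SamPredictor(sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth"))
#   predictor.set_image(query_np)  # H x W x 3 uint8 array of the query image
#   masks, scores, _ = predictor.predict(point_coords=np.array([point]),
#                                        point_labels=np.array([1]))
```

A single prototype with one point prompt is the simplest possible matching scheme; published approaches typically go further, e.g. sampling multiple positive and negative points from the similarity map, using dense correlation instead of one pooled prototype, or learning the matching step end to end.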