Self-Supervised Medical Image Learning
Self-supervised medical image learning aims to leverage the vast amounts of unannotated medical data to train powerful models for diagnosis and treatment, overcoming the limitations imposed by the scarcity and cost of expert annotations. Current research focuses on novel architectures, such as Vision Transformers and autoencoders, and self-supervised learning strategies including contrastive learning, masked image modeling, and optimal transport, often with techniques to handle the mixed dimensionality (2D and 3D) of medical images. These advances improve performance on downstream tasks such as image classification and segmentation, particularly where expert annotation is limited, for example fundus disease diagnosis and low-dose CT denoising. The resulting models hold significant potential for improving healthcare accessibility and efficiency.
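To make the masked image modeling idea concrete, below is a minimal PyTorch sketch of a masked-autoencoder-style pretraining objective on single-channel 2D slices (e.g. CT or fundus images). All names, layer sizes, the patch size, and the 75% masking ratio are illustrative assumptions for this sketch, not details of any specific method cited in the summary above.

```python
# Minimal masked image modeling sketch: hide most patches, reconstruct them from the rest.
# Hyperparameters and module sizes below are illustrative assumptions only.
import torch
import torch.nn as nn


class TinyMaskedAutoencoder(nn.Module):
    """Toy masked autoencoder: patchify, drop patches, encode visible ones, reconstruct all."""

    def __init__(self, img_size=64, patch=8, dim=128, mask_ratio=0.75):
        super().__init__()
        self.patch = patch
        self.num_patches = (img_size // patch) ** 2
        self.mask_ratio = mask_ratio
        patch_dim = patch * patch  # single-channel patches flattened to vectors
        self.embed = nn.Linear(patch_dim, dim)
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.decoder = nn.Linear(dim, patch_dim)  # lightweight pixel-reconstruction head

    def patchify(self, x):
        # (B, 1, H, W) -> (B, N, patch*patch)
        B, C, H, W = x.shape
        p = self.patch
        x = x.reshape(B, C, H // p, p, W // p, p)
        return x.permute(0, 2, 4, 3, 5, 1).reshape(B, -1, p * p * C)

    def forward(self, imgs):
        patches = self.patchify(imgs)            # (B, N, P)
        B, N, _ = patches.shape
        tokens = self.embed(patches) + self.pos  # (B, N, D)

        # Randomly keep a small subset of patches; the rest are masked out.
        num_keep = int(N * (1 - self.mask_ratio))
        noise = torch.rand(B, N, device=imgs.device)
        ids_keep = noise.argsort(dim=1)[:, :num_keep]
        visible = torch.gather(
            tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        )

        encoded = self.encoder(visible)  # encode only the visible patches

        # Scatter encoded tokens back; masked positions receive a learned mask token.
        full = self.mask_token.expand(B, N, -1).clone()
        full.scatter_(1, ids_keep.unsqueeze(-1).expand(-1, -1, full.size(-1)), encoded)
        recon = self.decoder(full)       # (B, N, P)

        # Reconstruction loss is computed only on the masked patches.
        mask = torch.ones(B, N, device=imgs.device)
        mask.scatter_(1, ids_keep, 0.0)
        per_patch = ((recon - patches) ** 2).mean(dim=-1)
        return (per_patch * mask).sum() / mask.sum()


# Usage sketch: pretrain on unlabeled slices, then reuse the encoder downstream
# (e.g. for classification or segmentation fine-tuning with few labels).
model = TinyMaskedAutoencoder()
fake_batch = torch.randn(4, 1, 64, 64)  # stand-in for a batch of unannotated slices
loss = model(fake_batch)
loss.backward()
```

The same recipe extends to 3D volumes by patchifying along a third axis, and contrastive objectives can be swapped in by replacing the reconstruction loss with an agreement loss between two augmented views of the same scan.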