Self-Supervised Technique
Self-supervised learning techniques train machine learning models on unlabeled data by designing pretext tasks that implicitly capture the underlying structure of the data. Current research focuses on efficient algorithms such as contrastive learning and Siamese networks, often combining multiple architectures (e.g., CNNs and Transformers) to improve performance and robustness across diverse data modalities (images, audio, text, time series). These methods are proving valuable in applications such as healthcare, anomaly detection, and natural language processing, particularly where labeled data is scarce or expensive to obtain, advancing representation learning and enabling more efficient model training.
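As a concrete illustration of the contrastive approach mentioned above, the following is a minimal sketch of a SimCLR-style NT-Xent (InfoNCE) objective in PyTorch: two augmented views of each image are pulled together while other images in the batch act as negatives. The encoder architecture, function names (SmallEncoder, nt_xent_loss), and hyperparameters are illustrative assumptions, not taken from any of the papers listed below.

```python
# Hedged sketch: a toy contrastive (NT-Xent / InfoNCE) setup for self-supervised
# representation learning. All names and sizes here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallEncoder(nn.Module):
    """Toy CNN backbone followed by a projection head (illustrative only)."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.projector = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, embed_dim)
        )

    def forward(self, x):
        return self.projector(self.backbone(x))


def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss: the two views of the same image are positives,
    every other image in the batch serves as a negative."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D) unit vectors
    sim = z @ z.t() / temperature                         # cosine similarity matrix
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                 # ignore self-similarity
    # The positive for sample i is its other view at index (i + n) mod 2N.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    encoder = SmallEncoder()
    # Two "augmented views" of the same batch; a real pipeline would use
    # random crops, color jitter, etc. Random noise stands in here.
    images = torch.randn(8, 3, 64, 64)
    view1 = images + 0.1 * torch.randn_like(images)
    view2 = images + 0.1 * torch.randn_like(images)
    loss = nt_xent_loss(encoder(view1), encoder(view2))
    print(f"contrastive loss: {loss.item():.4f}")
```

Training this objective on unlabeled images yields an encoder whose representations can then be fine-tuned or probed on a small labeled set, which is the label-efficiency benefit discussed in the surveys below.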
Papers
Towards Label-efficient Automatic Diagnosis and Analysis: A Comprehensive Survey of Advanced Deep Learning-based Weakly-supervised, Semi-supervised and Self-supervised Techniques in Histopathological Image Analysis
Linhao Qu, Siyu Liu, Xiaoyu Liu, Manning Wang, Zhijian Song
MvDeCor: Multi-view Dense Correspondence Learning for Fine-grained 3D Segmentation
Gopal Sharma, Kangxue Yin, Subhransu Maji, Evangelos Kalogerakis, Or Litany, Sanja Fidler