Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models using unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques like contrastive learning, masked autoencoders, and generative models within various architectures such as transformers and convolutional neural networks. These advancements are significant because they reduce the reliance on expensive and time-consuming data labeling, enabling the development of robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. The efficiency gains from SSL are also a key focus, with research exploring optimal model and data sizes for given computational budgets.
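To make the contrastive-learning idea mentioned above concrete, below is a minimal sketch of an NT-Xent (InfoNCE-style) loss in PyTorch, where two augmented views of the same input serve as the positive pair and all other samples in the batch act as negatives. The names `encoder` and `augment` are illustrative placeholders, not components of any specific paper listed here, and the temperature value is just a common default rather than a recommended setting.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmented views of the same N inputs."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)              # (2N, D) stacked views
    sim = z @ z.t() / temperature               # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))       # exclude self-similarity
    # the positive for row i is its other view at index i+N (or i-N)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Hypothetical usage with a user-supplied encoder and augmentation pipeline:
# loss = nt_xent_loss(encoder(augment(x)), encoder(augment(x)))
```

The first paper below studies how the temperature parameter in exactly this kind of loss can be individualized per sample rather than fixed, which is why it is exposed as an argument here.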
Papers
Not All Semantics are Created Equal: Contrastive Self-supervised Learning with Automatic Temperature Individualization
Zi-Hao Qiu, Quanqi Hu, Zhuoning Yuan, Denny Zhou, Lijun Zhang, Tianbao Yang
S-JEA: Stacked Joint Embedding Architectures for Self-Supervised Visual Representation Learning
Alžběta Manová, Aiden Durrant, Georgios Leontidis
Zero-Shot Text Classification via Self-Supervised Tuning
Chaoqun Liu, Wenxuan Zhang, Guizhen Chen, Xiaobao Wu, Anh Tuan Luu, Chip Hong Chang, Lidong Bing
Using Spatio-Temporal Dual-Stream Network with Self-Supervised Learning for Lung Tumor Classification on Radial Probe Endobronchial Ultrasound Video
Ching-Kai Lin, Chin-Wen Chen, Yun-Chien Cheng
Self-Supervised Learning for Organs At Risk and Tumor Segmentation with Uncertainty Quantification
Ilkin Isler, Debesh Jha, Curtis Lisle, Justin Rineer, Patrick Kelly, Bulent Aydogan, Mohamed Abazeed, Damla Turgut, Ulas Bagci