Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models using unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques like contrastive learning, masked autoencoders, and generative models within various architectures such as transformers and convolutional neural networks. These advancements are significant because they reduce the reliance on expensive and time-consuming data labeling, enabling the development of robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. The efficiency gains from SSL are also a key focus, with research exploring optimal model and data sizes for given computational budgets.
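As a concrete illustration of the contrastive-learning objective mentioned above, here is a minimal sketch of an NT-Xent (SimCLR-style) loss in PyTorch. The function name, temperature value, and toy tensors are illustrative assumptions, not taken from any of the papers listed below.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent loss: each embedding's positive is the other augmented view of the
    same input; the remaining 2N - 2 embeddings in the batch act as negatives."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit-norm rows
    sim = (z @ z.t()) / temperature                     # (2N, 2N) scaled cosine similarity
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))          # exclude self-similarity
    # Row i's positive sits at index i + N (first half) or i - N (second half).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Toy usage: random tensors standing in for encoder outputs of two augmented views.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```

Pulling both views into one 2N-row similarity matrix lets a single cross-entropy call score every pair at once, which is why this formulation benefits from large batch sizes.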
Papers
CroMo-Mixup: Augmenting Cross-Model Representations for Continual Self-Supervised Learning
Erum Mushtaq, Duygu Nur Yaldiz, Yavuz Faruk Bakman, Jie Ding, Chenyang Tao, Dimitrios Dimitriadis, Salman Avestimehr
An efficient framework based on large foundation model for cervical cytopathology whole slide image screening
Jialong Huang, Gaojie Li, Shichao Kan, Jianfeng Liu, Yixiong Liang
Efficient Unsupervised Visual Representation Learning with Explicit Cluster Balancing
Ioannis Maniadis Metaxas, Georgios Tzimiropoulos, Ioannis Patras
Joint-Embedding Predictive Architecture for Self-Supervised Learning of Mask Classification Architecture
Dong-Hee Kim, Sungduk Cho, Hyeonwoo Cho, Chanmin Park, Jinyoung Kim, Won Hwa Kim
TE-SSL: Time and Event-aware Self Supervised Learning for Alzheimer's Disease Progression Analysis
Jacob Thrasher, Alina Devkota, Ahmed Tafti, Binod Bhattarai, Prashnna Gyawali
A Clinical Benchmark of Public Self-Supervised Pathology Foundation Models
Gabriele Campanella, Shengjia Chen, Ruchika Verma, Jennifer Zeng, Aryeh Stock, Matt Croken, Brandon Veremis, Abdulkadir Elmas, Kuan-lin Huang, +4 more
AnatoMask: Enhancing Medical Image Segmentation with Reconstruction-guided Self-masking
Yuheng Li, Tianyu Luan, Yizhou Wu, Shaoyan Pan, Yenho Chen, Xiaofeng Yang
How JEPA Avoids Noisy Features: The Implicit Bias of Deep Linear Self Distillation Networks
Etai Littwin, Omid Saremi, Madhu Advani, Vimal Thilak, Preetum Nakkiran, Chen Huang, Joshua Susskind
Precision at Scale: Domain-Specific Datasets On-Demand
Jesús M Rodríguez-de-Vera, Imanol G Estepa, Ignacio Sarasúa, Bhalaji Nagarajan, Petia Radeva
LLMcap: Large Language Model for Unsupervised PCAP Failure Detection
Lukasz Tulczyjew, Kinan Jarrah, Charles Abondo, Dina Bennett, Nathanael Weill
Learning from Memory: Non-Parametric Memory Augmented Self-Supervised Learning of Visual Features
Thalles Silva, Helio Pedrini, Adín Ramírez Rivera