Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques such as contrastive learning, masked autoencoders, and generative models within architectures including transformers and convolutional neural networks. These advances are significant because they reduce reliance on expensive, time-consuming data labeling, enabling robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. Efficiency is also a key focus, with research exploring the optimal model and data sizes for a given computational budget.
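To make the contrastive technique mentioned above concrete, here is a minimal sketch of a SimCLR-style InfoNCE objective in PyTorch. The function name, temperature value, and toy encoder are illustrative assumptions, not taken from any paper listed below; real systems use deep backbones, careful augmentations, and large batches.

```python
# Minimal sketch of a SimCLR-style InfoNCE contrastive loss (illustrative only).
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Contrastive loss over two augmented views of the same batch.

    z1, z2: (batch, dim) embeddings of two augmentations of the same inputs.
    Positive pairs are (z1[i], z2[i]); all other cross pairs act as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    # Cosine-similarity matrix between every view-1 and view-2 embedding.
    logits = z1 @ z2.t() / temperature            # shape: (batch, batch)
    targets = torch.arange(z1.size(0), device=z1.device)
    # Cross-entropy pulls the diagonal (positive pairs) up, off-diagonal down.
    return F.cross_entropy(logits, targets)

# Usage with a toy encoder (hypothetical; stands in for a real backbone).
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
x1, x2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)  # two "views"
loss = info_nce_loss(encoder(x1), encoder(x2))
loss.backward()
```

The key design choice is that no labels appear anywhere: the supervisory signal comes entirely from knowing which pairs of views originate from the same example.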
Papers
Histopathology image embedding based on foundation models features aggregation for patient treatment response prediction
Bilel Guetarni, Feryal Windal, Halim Benhabiles, Mahfoud Chaibi, Romain Dubois, Emmanuelle Leteurtre, Dominique Collard
A Comprehensive Survey of LLM Alignment Techniques: RLHF, RLAIF, PPO, DPO and More
Zhichao Wang, Bin Bi, Shiva Kumar Pentyala, Kiran Ramnath, Sougata Chaudhuri, Shubham Mehrotra, Zixu Zhu, Xiang-Bo Mao, Sitaram Asur, Na Cheng
CroMo-Mixup: Augmenting Cross-Model Representations for Continual Self-Supervised Learning
Erum Mushtaq, Duygu Nur Yaldiz, Yavuz Faruk Bakman, Jie Ding, Chenyang Tao, Dimitrios Dimitriadis, Salman Avestimehr
An efficient framework based on large foundation model for cervical cytopathology whole slide image screening
Jialong Huang, Gaojie Li, Shichao Kan, Jianfeng Liu, Yixiong Liang
Efficient Unsupervised Visual Representation Learning with Explicit Cluster Balancing
Ioannis Maniadis Metaxas, Georgios Tzimiropoulos, Ioannis Patras
Joint-Embedding Predictive Architecture for Self-Supervised Learning of Mask Classification Architecture
Dong-Hee Kim, Sungduk Cho, Hyeonwoo Cho, Chanmin Park, Jinyoung Kim, Won Hwa Kim
TE-SSL: Time and Event-aware Self Supervised Learning for Alzheimer's Disease Progression Analysis
Jacob Thrasher, Alina Devkota, Ahmed Tafti, Binod Bhattarai, Prashnna Gyawali
A Clinical Benchmark of Public Self-Supervised Pathology Foundation Models
Gabriele Campanella, Shengjia Chen, Ruchika Verma, Jennifer Zeng, Aryeh Stock, Matt Croken, Brandon Veremis, Abdulkadir Elmas, Kuan-lin Huang, Ricky Kwan, Jane Houldsworth, Adam J. Schoenfeld, Chad Vanderbilt
AnatoMask: Enhancing Medical Image Segmentation with Reconstruction-guided Self-masking
Yuheng Li, Tianyu Luan, Yizhou Wu, Shaoyan Pan, Yenho Chen, Xiaofeng Yang
How JEPA Avoids Noisy Features: The Implicit Bias of Deep Linear Self Distillation Networks
Etai Littwin, Omid Saremi, Madhu Advani, Vimal Thilak, Preetum Nakkiran, Chen Huang, Joshua Susskind
Precision at Scale: Domain-Specific Datasets On-Demand
Jesús M Rodríguez-de-Vera, Imanol G Estepa, Ignacio Sarasúa, Bhalaji Nagarajan, Petia Radeva
LLMcap: Large Language Model for Unsupervised PCAP Failure Detection
Lukasz Tulczyjew, Kinan Jarrah, Charles Abondo, Dina Bennett, Nathanael Weill