Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques such as contrastive learning, masked autoencoders, and generative models within architectures including transformers and convolutional neural networks. These advances matter because they reduce reliance on expensive, time-consuming data labeling, enabling robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. Efficiency is also a key focus, with research exploring the optimal model and data sizes for a given computational budget.
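To make the contrastive-learning idea concrete, the sketch below implements a SimCLR-style NT-Xent loss in PyTorch: two augmented views of each input are embedded, and each embedding is trained to identify its counterpart among all other embeddings in the batch. This is a minimal illustration under assumed conventions, not any listed paper's implementation; the function name, batch size, and temperature value are illustrative.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    # z1, z2: embeddings of two augmented views of the same batch, shape (N, D).
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit-normalized
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # The positive for row i is its counterpart from the other view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage: random stand-ins for embeddings of two views of 8 inputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2))

Masked-autoencoder pretext tasks follow the same self-supervision pattern but replace this instance-discrimination objective with a reconstruction loss over masked-out input patches.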
Papers
Unveiling the Potential of Probabilistic Embeddings in Self-Supervised Learning
Denis Janiak, Jakub Binkowski, Piotr Bielak, Tomasz Kajdanowicz
TorchAudio 2.1: Advancing speech recognition, self-supervised learning, and audio processing components for PyTorch
Jeff Hwang, Moto Hira, Caroline Chen, Xiaohui Zhang, Zhaoheng Ni, Guangzhi Sun, Pingchuan Ma, Ruizhe Huang, Vineel Pratap, Yuekai Zhang, Anurag Kumar, Chin-Yun Yu, Chuang Zhu, Chunxi Liu, Jacob Kahn, Mirco Ravanelli, Peng Sun, Shinji Watanabe, Yangyang Shi, Yumeng Tao, Robin Scheibler, Samuele Cornell, Sean Kim, Stavros Petridis
Combating Representation Learning Disparity with Geometric Harmonization
Zhihan Zhou, Jiangchao Yao, Feng Hong, Ya Zhang, Bo Han, Yanfeng Wang
Towards Matching Phones and Speech Representations
Gene-Ping Yang, Hao Tang
Weakly-Supervised Surgical Phase Recognition
Roy Hirsch, Regev Cohen, Mathilde Caron, Tomer Golany, Daniel Freedman, Ehud Rivlin