Self-Supervised Training
Self-supervised learning (SSL) trains machine learning models on unlabeled data by formulating pretext tasks that encourage the model to learn useful representations without explicit human annotations. Current research focuses on improving the efficiency and effectiveness of SSL across diverse domains, including speech processing (using architectures like FastConformer), image analysis (leveraging autoencoders and contrastive learning), and medical imaging (incorporating temporal and event information). The ability of SSL to leverage vast amounts of unlabeled data makes it increasingly significant for applications where labeled data is scarce or expensive, leading to advancements in various fields from healthcare to remote sensing.
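As a concrete illustration of the contrastive-learning pretext tasks mentioned above, the following is a minimal sketch of a SimCLR-style objective (the NT-Xent loss), assuming PyTorch. The tiny encoder, the noise-based "augmentations", and the hyperparameters are hypothetical placeholders for illustration, not the setup of any particular paper; a real pipeline would use a deep backbone and domain-appropriate augmentations.

```python
# Sketch of a SimCLR-style contrastive pretext task with the NT-Xent loss.
# Each sample is augmented into two "views"; the model is trained to pull the
# two views of the same sample together and push apart views of other samples.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Toy encoder + projection head producing unit-norm embeddings (placeholder)."""
    def __init__(self, in_dim=32, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # L2-normalize for cosine similarity

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent: the two views of each sample are positives; all other
    samples in the batch act as negatives."""
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                      # (2n, d)
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool)           # exclude self-similarity
    sim = sim.masked_fill(mask, float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n),        # row i's positive is i + n
                         torch.arange(0, n)])           # row i + n's positive is i
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    torch.manual_seed(0)
    encoder = TinyEncoder()
    optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    x = torch.randn(8, 32)                              # a batch of unlabeled samples
    for step in range(5):
        # Additive noise stands in for real data augmentations (crops, masking, etc.).
        v1 = x + 0.1 * torch.randn_like(x)
        v2 = x + 0.1 * torch.randn_like(x)
        loss = nt_xent_loss(encoder(v1), encoder(v2))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f"step {step}: loss = {loss.item():.4f}")
```

After pretraining on unlabeled data with an objective like this, the encoder (without the projection head) is typically reused as a feature extractor or fine-tuned on the small labeled set available for the downstream task.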