Self-Supervised Training
Self-supervised learning (SSL) trains models on unlabeled data by formulating pretext tasks that push the model to learn useful representations without explicit human annotations. Current research focuses on improving the efficiency and effectiveness of SSL across diverse domains, including speech processing (using architectures such as FastConformer), image analysis (leveraging autoencoders and contrastive learning), and medical imaging (incorporating temporal and event information). Because SSL can exploit vast amounts of unlabeled data, it is increasingly important for applications where labeled data is scarce or expensive, driving advances in fields ranging from healthcare to remote sensing.
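To make the idea of a pretext task concrete, below is a minimal sketch of one common contrastive objective, the SimCLR-style NT-Xent loss, assuming PyTorch. The encoder outputs, batch size, embedding dimension, and temperature are illustrative placeholders, not drawn from any specific paper.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss over two augmented views of the same batch.

    z1, z2: (N, D) embeddings; row i of z1 and row i of z2 are two views
    of the same sample (a positive pair), all other rows act as negatives.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D) unit vectors
    sim = z @ z.t() / temperature                       # pairwise cosine similarity
    sim.fill_diagonal_(float('-inf'))                   # a sample is never its own negative
    n = z1.size(0)
    # The positive for row i is row i + n, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage: random tensors stand in for encoder outputs of two augmented views.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```

In practice each input (an image, audio clip, etc.) is augmented twice, and the model is trained to match the two views of the same sample against all other samples in the batch; the supervision comes entirely from the data itself, with no human labels.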