Self-Supervised Training
Self-supervised learning (SSL) trains machine learning models on unlabeled data by formulating pretext tasks that push the model to learn useful representations without explicit human annotations. Current research focuses on improving the efficiency and effectiveness of SSL across diverse domains, including speech processing (using architectures such as FastConformer), image analysis (leveraging autoencoders and contrastive learning), and medical imaging (incorporating temporal and event information). Because SSL can exploit vast amounts of unlabeled data, it is increasingly important for applications where labeled data is scarce or expensive, driving advances in fields ranging from healthcare to remote sensing.
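To make the contrastive-learning idea mentioned above concrete, the snippet below is a minimal sketch of a SimCLR-style NT-Xent pretext loss in PyTorch: two augmented views of the same unlabeled batch are embedded, and each embedding must identify its counterpart among all other embeddings. The function name, temperature value, and the assumed encoder/augmentation pipeline are illustrative choices, not details taken from the papers summarized here.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive (NT-Xent) loss over two augmented views of the same batch.

    z1, z2: (N, D) embeddings; the positive pair for row i of z1 is row i of z2.
    """
    # Normalize so that dot products equal cosine similarities.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)          # (2N, D) stacked views
    sim = z @ z.t() / temperature           # (2N, 2N) pairwise similarities
    sim.fill_diagonal_(float("-inf"))       # exclude each embedding's similarity with itself
    n = z1.size(0)
    # Row i (< N) pairs with row i + N, and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(sim.device)
    return F.cross_entropy(sim, targets)

# Hypothetical usage with a shared encoder and a random augmentation pipeline:
#   loss = nt_xent_loss(encoder(augment(x)), encoder(augment(x)))
#   loss.backward()
```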