Pre-Trained Text Encoders
Pre-trained text encoders are foundational models in natural language processing that learn general-purpose text representations from massive datasets for use in downstream tasks. Current research focuses on improving efficiency (e.g., through lightweight models and sparse training), enhancing zero-shot capabilities (e.g., via prototype shifting and contrastive learning), and addressing limitations such as domain shift and knowledge gaps (e.g., by incorporating generative LLMs and visual information). These advances are improving performance across a wide range of tasks, from image retrieval and generation to intent classification and medical image analysis.
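To make the zero-shot retrieval and contrastive-learning ideas above concrete, here is a minimal sketch, not drawn from any particular paper. It assumes PyTorch and Hugging Face transformers are installed; the checkpoint name sentence-transformers/all-MiniLM-L6-v2 is an arbitrary example, and the embed and info_nce helpers are named here purely for illustration.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Example checkpoint (assumption); any BERT-style text encoder works the same way.
MODEL_NAME = "sentence-transformers/all-MiniLM-L6-v2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)


def embed(texts):
    """Mean-pool the encoder's last hidden states into one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():  # inference-only here; enable gradients when fine-tuning
        hidden = encoder(**batch).last_hidden_state       # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # ignore padding tokens
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
    return F.normalize(pooled, dim=-1)                    # unit-length embeddings


# Zero-shot retrieval: rank corpus texts by cosine similarity to a query.
queries = embed(["a dog catching a frisbee"])
corpus = embed([
    "a puppy leaps for a flying disc",
    "a chest X-ray showing pneumonia",
    "reset my account password",
])
print(queries @ corpus.T)  # higher score = closer semantic match


def info_nce(a, b, temperature=0.07):
    """InfoNCE-style contrastive loss for aligned (a[i], b[i]) pairs:
    the diagonal entries are positives, all other rows are in-batch negatives."""
    logits = (a @ b.T) / temperature
    targets = torch.arange(a.size(0))
    return F.cross_entropy(logits, targets)
```

The same pooling-plus-cosine-similarity recipe underlies most downstream uses of such encoders; contrastive objectives like the one sketched in info_nce are typically applied during pre-training or fine-tuning rather than at inference time.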