Text Embeddings
Text embeddings are numerical vector representations of text that capture semantic meaning, letting machines compare and process language quantitatively. Current research focuses on improving the quality and controllability of these embeddings, particularly through contrastive learning, fine-tuning large language models (LLMs), and novel architectures that better handle complex prompts and disentangle attributes within embeddings. These advances underpin applications such as image generation, information retrieval, and sentiment analysis, and they improve the performance and efficiency of many natural language processing tasks.
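The contrastive learning mentioned above is commonly realized as an in-batch InfoNCE objective: paired texts (e.g., a query and a matching document) are pulled together in embedding space while the other examples in the batch serve as negatives. The sketch below is a minimal, generic PyTorch illustration of that loss, not the method of any paper listed here; the tensor shapes, temperature value, and toy inputs are assumptions for demonstration.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb, positive_emb, temperature=0.05):
    """In-batch contrastive (InfoNCE) loss over paired text embeddings.

    query_emb, positive_emb: (batch, dim) tensors where row i of each
    tensor forms a positive pair; all other rows act as in-batch negatives.
    """
    # L2-normalize so dot products equal cosine similarities.
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(positive_emb, dim=-1)

    # (batch, batch) similarity matrix; the diagonal holds positive pairs.
    logits = q @ p.T / temperature

    # Each query's target is its own row index (its matching positive).
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)

# Toy usage: random vectors standing in for encoder outputs.
queries = torch.randn(8, 256)
positives = queries + 0.1 * torch.randn(8, 256)  # noisy "paraphrases"
print(info_nce_loss(queries, positives))
```

In practice the two inputs come from a text encoder (often a fine-tuned LLM), and the temperature is a tuned hyperparameter; the in-batch-negatives trick is what makes this objective scale without explicit negative mining.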
Papers
Knowledge Graph Completion using Structural and Textual Embeddings
Sakher Khalil Alqaaidi, Krzysztof Kochut
CatLIP: CLIP-level Visual Recognition Accuracy with 2.7x Faster Pre-training on Web-scale Image-Text Data
Sachin Mehta, Maxwell Horton, Fartash Faghri, Mohammad Hossein Sekhavat, Mahyar Najibi, Mehrdad Farajtabar, Oncel Tuzel, Mohammad Rastegari
Text Is MASS: Modeling as Stochastic Embedding for Text-Video Retrieval
Jiamian Wang, Guohao Sun, Pichao Wang, Dongfang Liu, Sohail Dianat, Majid Rabbani, Raghuveer Rao, Zhiqiang Tao
MapGuide: A Simple yet Effective Method to Reconstruct Continuous Language from Brain Activities
Xinpei Zhao, Jingyuan Sun, Shaonan Wang, Jing Ye, Xiaohan Zhang, Chengqing Zong