Text Embeddings
Text embeddings are numerical representations of text that capture semantic meaning, enabling computers to process and compare language mathematically. Current research focuses on improving the quality and controllability of these embeddings, particularly through contrastive learning, fine-tuning of large language models (LLMs), and novel architectures that better handle complex prompts and disentangle attributes within an embedding. These advances underpin applications such as image generation, information retrieval, and sentiment analysis, and they improve the performance and efficiency of many natural language processing tasks.
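Contrastive learning, one of the techniques mentioned above, trains an encoder so that embeddings of related texts land close together while embeddings of unrelated texts are pushed apart. Below is a minimal PyTorch sketch of the in-batch InfoNCE objective commonly used for this; the function name, temperature value, and toy tensors are illustrative assumptions, not drawn from any of the papers listed here.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb, positive_emb, temperature=0.05):
    """Contrastive (InfoNCE) loss over a batch of embedding pairs.

    query_emb, positive_emb: (batch, dim) tensors where row i of each
    tensor embeds two views of the same text (e.g., a query and its
    matching document); all other rows act as in-batch negatives.
    """
    # L2-normalise so dot products equal cosine similarities.
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(positive_emb, dim=-1)
    # (batch, batch) similarity matrix; the diagonal holds positive pairs.
    logits = q @ p.T / temperature
    targets = torch.arange(q.size(0), device=q.device)
    # Cross-entropy pulls each query toward its positive and pushes it
    # away from every other text in the batch.
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for encoder outputs.
batch, dim = 8, 384
q = torch.randn(batch, dim)
p = q + 0.1 * torch.randn(batch, dim)  # noisy second view of each text
print(info_nce_loss(q, p).item())
```

In practice the two views per row would come from a real text encoder (for instance, a fine-tuned LLM) applied to paraphrases or query-document pairs rather than random tensors.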
Papers
Three ways to improve feature alignment for open vocabulary detection
Relja Arandjelović, Alex Andonian, Arthur Mensch, Olivier J. Hénaff, Jean-Baptiste Alayrac, Andrew Zisserman
Medical diffusion on a budget: Textual Inversion for medical image generation
Bram de Wilde, Anindo Saha, Maarten de Rooij, Henkjan Huisman, Geert Litjens