Jina Embeddings
Jina embeddings are vector representations of data, primarily text and images, that capture semantic meaning and relationships for information retrieval and downstream tasks. Current research focuses on improving embedding quality through novel loss functions (e.g., the SimO loss for fine-grained contrastive learning), on efficient architectures such as decoupled embeddings for large datasets and multilingual settings, and on non-Euclidean spaces (e.g., hyperbolic space) that better represent complex relationships. These advances improve performance across diverse applications, including recommendation systems, question answering, and cybersecurity, by enabling more accurate similarity search and more effective model training.
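The similarity search mentioned above typically reduces to comparing embedding vectors with cosine similarity: semantically related inputs map to vectors pointing in similar directions. A minimal sketch, using made-up toy vectors rather than output from any real embedding model:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (range [-1, 1])."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (illustrative values only; real models
# produce vectors with hundreds or thousands of dimensions).
query = [0.1, 0.3, -0.2, 0.4]
doc_relevant = [0.12, 0.28, -0.18, 0.35]   # points in nearly the same direction
doc_unrelated = [-0.4, 0.1, 0.5, -0.2]     # points in a very different direction

print(cosine_similarity(query, doc_relevant))   # close to 1.0
print(cosine_similarity(query, doc_unrelated))  # much lower
```

In a retrieval setting, documents are ranked by their cosine similarity to the query embedding, so the relevant document above would be returned first.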
Papers
On the Surprising Behaviour of node2vec
Celia Hacker, Bastian Rieck
TransDrift: Modeling Word-Embedding Drift using Transformer
Nishtha Madaan, Prateek Chaudhury, Nishant Kumar, Srikanta Bedathur
Towards Better Understanding with Uniformity and Explicit Regularization of Embeddings in Embedding-based Neural Topic Models
Wei Shao, Lei Huang, Shuqi Liu, Shihua Ma, Linqi Song