Jina Embeddings
Jina embeddings are vector representations of data, primarily text and images, designed to capture semantic meaning and relationships for information retrieval and downstream tasks. Current research focuses on improving embedding quality through novel loss functions (e.g., SimO loss for fine-grained contrastive learning), on efficient architectures such as decoupled embeddings for large datasets and multilingual contexts, and on non-Euclidean spaces (e.g., hyperbolic space) that better represent complex relationships. By enabling more accurate similarity search and more effective model training, these advances improve performance across diverse applications, including recommendation systems, question answering, and even cybersecurity.
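The similarity search mentioned above typically works by comparing embedding vectors with cosine similarity and ranking candidates by score. Below is a minimal, self-contained sketch of that ranking step; the toy vectors and the names `query`, `doc_a`, and `doc_b` are illustrative stand-ins for real model output, not part of any Jina API.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" standing in for real model output.
query = np.array([0.1, 0.3, 0.5, 0.1])
doc_a = np.array([0.1, 0.29, 0.52, 0.08])  # semantically close to the query
doc_b = np.array([0.9, -0.2, 0.05, 0.4])   # unrelated content

# Rank candidate documents by similarity to the query, as a retrieval
# system would before returning its top results.
scores = {"doc_a": cosine_similarity(query, doc_a),
          "doc_b": cosine_similarity(query, doc_b)}
best = max(scores, key=scores.get)
```

In practice the vectors would come from an embedding model, and at scale the brute-force comparison would be replaced by an approximate nearest-neighbor index, but the scoring logic is the same.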
Papers
One-Class Learning with Adaptive Centroid Shift for Audio Deepfake Detection
Hyun Myung Kim, Kangwook Jang, Hoirin Kim
Are there identifiable structural parts in the sentence embedding whole?
Vivi Nastase, Paola Merlo
Homomorphisms and Embeddings of STRIPS Planning Models
Arnaud Lequen, Martin C. Cooper, Frédéric Maris