Semantic Similarity
Semantic similarity research focuses on computationally measuring how closely two pieces of text overlap in meaning, which underpins tasks such as information retrieval and knowledge graph construction. Current work emphasizes large language models (LLMs) and transformer architectures, often combining contrastive learning with graph-based methods to capture both semantic and structural relationships. These advances are central to NLP applications such as question answering, document summarization, and cross-lingual understanding, and also to improving the efficiency and interpretability of the underlying models.
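In practice, the most common embedding-based formulation encodes each text into a dense vector and scores similarity as the cosine between the vectors. The sketch below illustrates this pipeline; it assumes the sentence-transformers library and the all-MiniLM-L6-v2 checkpoint, which are illustrative choices and not tied to the papers listed here.

```python
# Minimal sketch of embedding-based semantic similarity scoring.
# Assumes: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

# Load a small pretrained sentence encoder (illustrative checkpoint choice).
model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "A man is eating food.",
    "Someone is having a meal.",
    "The stock market fell sharply today.",
]

# Encode each sentence into a dense vector.
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the first sentence and the others:
# values near 1 indicate strong meaning overlap, near 0 little overlap.
scores = util.cos_sim(embeddings[0], embeddings[1:])
for sentence, score in zip(sentences[1:], scores[0]):
    print(f"{score.item():.3f}  {sentence}")
```

Paraphrase pairs typically score noticeably higher than unrelated pairs under this setup, which is why cosine over learned embeddings remains the default baseline that contrastive and graph-based methods aim to improve on.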
Papers
TexIm FAST: Text-to-Image Representation for Semantic Similarity Evaluation using Transformers
Wazib Ansar, Saptarsi Goswami, Amlan Chakrabarti
Repurposing Language Models into Embedding Models: Finding the Compute-Optimal Recipe
Alicja Ziarko, Albert Q. Jiang, Bartosz Piotrowski, Wenda Li, Mateja Jamnik, Piotr Miłoś