Semantic Similarity
Semantic similarity research focuses on computationally measuring the degree of meaning overlap between pieces of text, enabling tasks like information retrieval and knowledge graph construction. Current research emphasizes leveraging large language models (LLMs) and transformer architectures, often incorporating techniques like contrastive learning and graph-based methods to capture both semantic and structural relationships. This work is crucial for advancing various NLP applications, including question answering, document summarization, and cross-lingual understanding, as well as improving the efficiency and interpretability of these models.
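The core idea underlying most of the embedding-based work surveyed here is simple: encode each text as a vector and score meaning overlap by the cosine of the angle between the vectors. The sketch below illustrates this with tiny hand-made vectors; real models (e.g. transformer encoders) produce embeddings with hundreds of dimensions, and the values here are purely illustrative, not taken from any of the papers listed.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors:
    1.0 = identical direction (high semantic overlap), 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (hypothetical values for illustration).
emb_cat    = [0.8, 0.1, 0.4, 0.2]
emb_feline = [0.7, 0.2, 0.5, 0.1]
emb_car    = [0.1, 0.9, 0.0, 0.6]

print(cosine_similarity(emb_cat, emb_feline))  # high: related meanings
print(cosine_similarity(emb_cat, emb_car))     # lower: unrelated meanings
```

In practice the embeddings come from a trained model, and techniques such as contrastive learning (mentioned above) shape the vector space so that semantically related texts end up with high cosine similarity.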
Papers
TexIm FAST: Text-to-Image Representation for Semantic Similarity Evaluation using Transformers
Wazib Ansar, Saptarsi Goswami, Amlan Chakrabarti
Repurposing Language Models into Embedding Models: Finding the Compute-Optimal Recipe
Alicja Ziarko, Albert Q. Jiang, Bartosz Piotrowski, Wenda Li, Mateja Jamnik, Piotr Miłoś
Semantic Similarity Score for Measuring Visual Similarity at Semantic Level
Senran Fan, Zhicheng Bao, Chen Dong, Haotai Liang, Xiaodong Xu, Ping Zhang
Linguistically Conditioned Semantic Textual Similarity
Jingxuan Tu, Keer Xu, Liulu Yue, Bingyang Ye, Kyeongmin Rim, James Pustejovsky
SemEval-2024 Task 1: Semantic Textual Relatedness for African and Asian Languages
Nedjma Ousidhoum, Shamsuddeen Hassan Muhammad, Mohamed Abdalla, Idris Abdulmumin, Ibrahim Said Ahmad, Sanchit Ahuja, Alham Fikri Aji, Vladimir Araujo, Meriem Beloucif, Christine De Kock, Oumaima Hourrane, Manish Shrivastava, Thamar Solorio, Nirmal Surange, Krishnapriya Vishnubhotla, Seid Muhie Yimam, Saif M. Mohammad
Evaluation of Semantic Search and its Role in Retrieved-Augmented-Generation (RAG) for Arabic Language
Ali Mahboub, Muhy Eddin Za'ter, Bashar Al-Rfooh, Yazan Estaitia, Adnan Jaljuli, Asma Hakouz