Sense Embeddings
Sense embeddings represent the different meanings (senses) of ambiguous words as distinct vectors in a high-dimensional space, aiming to improve natural language processing tasks by resolving word ambiguity. Current research focuses on developing robust methods for learning these embeddings, often employing transformer-based architectures and exploring techniques like knowledge distillation and meta-learning to combine information from multiple sources. This work is significant because effectively capturing word senses is crucial for enhancing the accuracy and interpretability of various NLP applications, including word sense disambiguation, semantic change detection, and bias mitigation in language models.
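The core idea above can be illustrated with a minimal sketch: keep one vector per sense of an ambiguous word and disambiguate an occurrence by comparing its context vector against each sense vector. The inventory, sense keys, and vectors below are synthetic stand-ins (in practice they would come from a trained model, e.g. averaged contextual embeddings per annotated sense), not any specific paper's method.

```python
import numpy as np

# Toy sense inventory: each sense of an ambiguous word gets its own vector.
# Sense keys and vectors are hypothetical placeholders; real systems derive
# them from sense-annotated corpora or knowledge bases such as WordNet.
rng = np.random.default_rng(0)
SENSES = {
    "bank": {
        "bank%finance": rng.normal(size=8),
        "bank%river": rng.normal(size=8),
    }
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def disambiguate(word, context_vec, inventory=SENSES):
    """Return the sense whose embedding is most similar to the context vector."""
    senses = inventory[word]
    return max(senses, key=lambda s: cosine(senses[s], context_vec))
```

A usage example: a context vector close to the "river" sense vector should resolve to that sense, which is the nearest-neighbor lookup most sense-embedding approaches reduce to at inference time.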