Sense Embeddings

Sense embeddings represent the different meanings (senses) of ambiguous words as distinct vectors in a high-dimensional space, with the aim of resolving lexical ambiguity in natural language processing tasks. Current research focuses on robust methods for learning these embeddings, often employing transformer-based architectures and exploring techniques such as knowledge distillation and meta-learning to combine information from multiple sources. Capturing word senses effectively is crucial for improving the accuracy and interpretability of applications including word sense disambiguation, semantic change detection, and bias mitigation in language models.
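To make the idea concrete, below is a minimal sketch of one widely used recipe for building and applying sense embeddings: average a transformer's contextual vectors over sense-annotated occurrences of a word to obtain one vector per sense, then disambiguate a new occurrence by nearest-neighbour search. This is an illustrative assumption, not the method of any particular paper listed here; the `bert-base-uncased` checkpoint, the toy `SENSE_EXAMPLES` inventory, and the helper names are all hypothetical.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical sense inventory: a few sense-annotated example sentences per
# sense. In practice this would come from a sense-tagged corpus (e.g. SemCor).
SENSE_EXAMPLES = {
    "bank%finance": [
        ("She deposited her paycheck at the bank.", "bank"),
        ("The bank approved the loan application.", "bank"),
    ],
    "bank%river": [
        ("They had a picnic on the bank of the river.", "bank"),
        ("Erosion wore away the muddy bank after the flood.", "bank"),
    ],
}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def contextual_embedding(sentence: str, word: str) -> torch.Tensor:
    """Average the subword vectors of `word`'s first occurrence in `sentence`."""
    start = sentence.lower().find(word.lower())
    assert start >= 0, f"{word!r} not found in sentence"
    end = start + len(word)
    enc = tokenizer(sentence, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0]  # char span of each subword token
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
    # Keep subword tokens whose character span overlaps the target word.
    mask = torch.tensor([s < end and e > start for s, e in offsets.tolist()])
    return hidden[mask].mean(dim=0)

# One sense embedding per sense: the centroid of its annotated occurrences.
sense_vectors = {
    sense: torch.stack([contextual_embedding(s, w) for s, w in examples]).mean(dim=0)
    for sense, examples in SENSE_EXAMPLES.items()
}

def disambiguate(sentence: str, word: str) -> str:
    """1-nearest-neighbour WSD: return the sense with highest cosine similarity."""
    query = contextual_embedding(sentence, word)
    return max(
        sense_vectors,
        key=lambda s: torch.cosine_similarity(query, sense_vectors[s], dim=0).item(),
    )

print(disambiguate("We moored the boat by the grassy bank.", "bank"))
```

The final call should select "bank%river", since the query context sits closest to the river-bank centroid. The more elaborate techniques mentioned above (knowledge distillation, meta-learning) can be seen as ways to improve on or compress this basic centroid-plus-nearest-neighbour scheme.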

Papers