Contextualised Word Embeddings
Contextualized word embeddings aim to capture the nuanced meanings of words as they shift across contexts and time periods. Current research focuses on making these embeddings more interpretable, often using techniques such as principal component analysis to understand how changes in meaning are encoded within high-dimensional vector spaces, and on exploring linguistically motivated models as alternatives to complex, "black box" neural networks such as transformers. This work matters for natural language processing tasks such as sarcasm detection and semantic change analysis, where it promises more accurate and more explainable models of human language.
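As a rough illustration of the PCA-based analysis mentioned above (a minimal sketch, not the method of either paper listed below): collect the contextualized vectors a pretrained transformer assigns to one word across different sentences, then project them onto their top principal components to see along which directions the word's usage varies. The model name, the target word "cell", and the example sentences here are illustrative assumptions.

```python
import torch
from sklearn.decomposition import PCA
from transformers import AutoModel, AutoTokenizer

# Illustrative choices: any contextualizing encoder and any polysemous target word would do.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

target = "cell"
sentences = [
    "The prisoner spent the night in a cold cell.",
    "Each cell in the organism contains a nucleus.",
    "She answered the call on her cell phone.",
]

vectors = []
for sentence in sentences:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # shape: (seq_len, hidden_dim)
    # Locate the target word; assumes it surfaces as a single in-vocabulary token.
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    vectors.append(hidden[tokens.index(target)].numpy())

# Project the occurrence vectors onto their two main directions of variation.
pca = PCA(n_components=2)
coords = pca.fit_transform(vectors)
for sentence, (x, y) in zip(sentences, coords):
    print(f"({x:+.2f}, {y:+.2f})  {sentence}")
print("explained variance ratios:", pca.explained_variance_ratio_)
```

Occurrences that separate along the leading components correspond to distinct usages of the word; comparing how such clusters shift between corpora from different periods is the basic setup of semantic change analysis.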
Papers
Interpretable Word Sense Representations via Definition Generation: The Case of Semantic Change Analysis
Mario Giulianelli, Iris Luden, Raquel Fernandez, Andrey Kutuzov
Contextualized Word Vector-based Methods for Discovering Semantic Differences with No Training nor Word Alignment
Ryo Nagata, Hiroya Takamura, Naoki Otani, Yoshifumi Kawasaki