Contextualised Word Embeddings

Contextualized word embeddings aim to capture the nuanced meanings of words as they shift across contexts and time periods. Current research focuses on making these embeddings more interpretable, for instance by applying principal component analysis to reveal how changes in meaning are encoded within high-dimensional vector spaces, and by exploring linguistically motivated models as alternatives to complex, "black box" neural networks such as transformers. This work advances natural language processing tasks such as sarcasm detection and semantic change analysis, offering more accurate and explainable models for understanding human language.
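As a minimal illustrative sketch (not taken from any of the papers below), the snippet here shows the general idea: extract contextualized vectors for one word across several sentences with a pretrained BERT model, then run principal component analysis over the occurrence vectors to see how much of their variation a few directions explain. The model name, sentences, and target word are assumptions chosen for demonstration.

```python
# Illustrative sketch: contextualized embeddings of one word across contexts,
# inspected with PCA. Model, sentences, and word choice are assumptions.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.decomposition import PCA

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Sentences using "bank" in its financial and riverside senses.
sentences = [
    "The bank raised interest rates.",
    "We sat on the river bank.",
    "The bank approved the loan.",
    "Reeds grew along the muddy bank.",
]

def embed_word(sentence: str, word: str) -> torch.Tensor:
    """Return the final-layer hidden state for the first occurrence of `word`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)
    word_id = tokenizer.convert_tokens_to_ids(word)
    idx = inputs["input_ids"][0].tolist().index(word_id)
    return hidden[idx]

vectors = torch.stack([embed_word(s, "bank") for s in sentences]).numpy()

# Project the occurrence vectors onto their top two principal components;
# the explained-variance ratios hint at how strongly a small number of
# directions encode the contextual (sense) variation.
pca = PCA(n_components=2)
projected = pca.fit_transform(vectors)
print("explained variance ratios:", pca.explained_variance_ratio_)
print("2-D projections:\n", projected)
```

In a projection like this, occurrences of the financial sense and the riverside sense typically separate along the leading component, which is one concrete way such analyses probe how meaning change is encoded in the embedding space.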

Papers