Contextual Representation
Contextual representation focuses on creating data representations that capture the meaning of information within its surrounding context, going beyond static word or feature embeddings. Current research improves these representations through transformer-based architectures such as BERT, specialized attention mechanisms (e.g., Gaussian Adaptive Attention), and the incorporation of additional signals such as knowledge graphs or label semantics. These advances enable more accurate and robust models across natural language processing, computer vision, and speech recognition, benefiting tasks such as machine translation, semantic segmentation, and emotion recognition. The resulting richer contextual understanding improves both the performance and the interpretability of machine learning models across many fields.
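To make the distinction from static embeddings concrete, here is a minimal sketch (assuming the Hugging Face `transformers` library and the public `bert-base-uncased` checkpoint, neither of which is prescribed by the papers below) that shows how a contextual model assigns different vectors to the same word depending on its sentence:

```python
# Sketch: contextual vs. static embeddings with a BERT-style encoder.
# Assumes `torch` and `transformers` are installed; model choice is illustrative.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = [
    "The bank approved the loan application.",  # financial sense of "bank"
    "We had a picnic on the river bank.",       # geographical sense of "bank"
]

embeddings = []
with torch.no_grad():
    for text in sentences:
        inputs = tokenizer(text, return_tensors="pt")
        outputs = model(**inputs)  # last_hidden_state: (1, seq_len, hidden_size)
        # Locate the token "bank" and take its contextual vector.
        tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
        idx = tokens.index("bank")
        embeddings.append(outputs.last_hidden_state[0, idx])

# A static embedding table would give identical vectors for both occurrences;
# a contextual encoder yields a cosine similarity noticeably below 1.0.
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(f"cosine similarity between the two 'bank' vectors: {similarity:.3f}")
```

The same pattern of extracting per-token hidden states is the starting point for downstream uses such as outcome-phrase detection or traffic classification discussed in the papers listed next.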
Papers
Assessment of contextualised representations in detecting outcome phrases in clinical trials
Micheal Abaho, Danushka Bollegala, Paula R Williamson, Susanna Dodd
ET-BERT: A Contextualized Datagram Representation with Pre-training Transformers for Encrypted Traffic Classification
Xinjie Lin, Gang Xiong, Gaopeng Gou, Zhen Li, Junzheng Shi, Jing Yu