Word Embeddings
Word embeddings are dense vector representations of words that capture semantic meaning and relationships in a numerical space. Current research focuses on improving embedding quality through contextualization (conditioning a word's vector on its surrounding words), mitigating social biases encoded in the vectors, and extending embeddings to low-resource languages and specialized domains such as medicine, using architectures such as transformers and graph convolutional networks. These advances improve NLP tasks including text classification, question answering, and information retrieval, and in turn raise the accuracy and interpretability of language models in fields ranging from education to healthcare.
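As a concrete illustration of what "dense vector representations" and their relationships mean in practice, the minimal NumPy sketch below uses made-up 4-dimensional vectors (real embeddings such as word2vec or GloVe are learned from corpora and typically have 100 or more dimensions) to show cosine similarity and the well-known king - man + woman ≈ queen analogy; the specific vector values are illustrative only.

```python
import numpy as np

# Toy embedding table with made-up 4-dimensional vectors; a real model
# would learn these values from a large text corpus.
embeddings = {
    "king":  np.array([0.9,  0.8, 0.1, 0.2]),
    "queen": np.array([0.9, -0.8, 0.1, 0.2]),
    "man":   np.array([0.1,  0.8, 0.3, 0.1]),
    "woman": np.array([0.1, -0.8, 0.3, 0.1]),
    "apple": np.array([0.0,  0.0, 0.9, 0.7]),
}

def cosine(u, v):
    """Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Semantic relationships surface as vector arithmetic:
# king - man + woman should land nearest to queen.
target = embeddings["king"] - embeddings["man"] + embeddings["woman"]
# Exclude the query words themselves, as is standard for analogy tasks.
candidates = {w: v for w, v in embeddings.items()
              if w not in ("king", "man", "woman")}
best = max(candidates, key=lambda w: cosine(target, candidates[w]))
print(best)  # queen
```

The same nearest-neighbor-by-cosine-similarity pattern underlies many of the downstream applications mentioned above, such as retrieving semantically related documents in information retrieval.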
Papers
Word Embeddings and Validity Indexes in Fuzzy Clustering
Danial Toufani-Movaghar, Mohammad-Reza Feizi-Derakhshi
From Hyperbolic Geometry Back to Word Embeddings
Sultan Nurmukhamedov, Thomas Mach, Arsen Sheverdin, Zhenisbek Assylbekov
Approach to Predicting News -- A Precise Multi-LSTM Network With BERT
Chia-Lin Chen, Pei-Yu Huang, Yi-Ting Huang, Chun Lin