Word Embeddings
Word embeddings are dense vector representations of words that capture semantic meaning and relationships in a continuous numerical space. Current research focuses on improving embedding quality through contextualization (producing vectors that depend on the surrounding words), mitigating biases, and extending embeddings to low-resource languages and specialized domains such as medicine, using architectures including transformers and graph convolutional networks. These advances improve NLP tasks such as text classification, question answering, and information retrieval, and the resulting gains in accuracy and interpretability benefit fields from education to healthcare.
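To make the idea concrete, the sketch below represents a static embedding table as a plain lookup from words to vectors and compares words by cosine similarity. The four-dimensional vectors are hypothetical toy values; real embeddings such as word2vec or GloVe are learned from large corpora and typically have hundreds of dimensions.

```python
# Minimal sketch: static word embeddings as a lookup table, compared
# with cosine similarity. The 4-d vectors are hypothetical toy values.
import numpy as np

embeddings = {
    "king":  np.array([0.80, 0.65, 0.10, 0.05]),
    "queen": np.array([0.78, 0.70, 0.12, 0.06]),
    "apple": np.array([0.05, 0.10, 0.90, 0.75]),
}

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Semantically related words end up close in the vector space ...
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
# ... while unrelated words are farther apart.
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```

Contextualized embeddings go a step further: instead of one fixed vector per word, the vector depends on the sentence in which the word occurs. A minimal sketch, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint:

```python
# Sketch of contextual embeddings: a transformer assigns a different
# vector to each occurrence of a word depending on its sentence.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual vector for `word` within `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (tokens, 768)
    # Locate the token position of `word` (assumes it is a single
    # in-vocabulary token, which holds for "bank" in BERT's vocab).
    idx = inputs.input_ids[0].tolist().index(
        tokenizer.convert_tokens_to_ids(word))
    return hidden[idx]

# "bank" receives different vectors in different contexts.
v1 = embed("she sat by the river bank", "bank")
v2 = embed("he deposited cash at the bank", "bank")
print(torch.cosine_similarity(v1, v2, dim=0).item())  # noticeably < 1.0
```

Because the two occurrences of "bank" receive different vectors, downstream tasks such as question answering can distinguish the river sense from the financial one.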