Word Embeddings
Word embeddings are dense vector representations of words that capture semantic meaning and relationships within a numerical space. Current research focuses on improving embedding quality through contextualization (conditioning a word's vector on its surrounding words), mitigating social biases, and extending embeddings to low-resource languages and specialized domains such as medicine, using architectures like transformers and graph convolutional networks. These advances improve a range of NLP tasks, including text classification, question answering, and information retrieval, and benefit fields from education to healthcare through more accurate and interpretable language models.
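As a minimal illustration of how semantic relationships surface as geometry in an embedding space, the sketch below loads pretrained GloVe vectors through the gensim library and queries word similarities and analogies. The specific model name and the use of gensim are assumptions about one common setup, not a method prescribed by any of the papers here.

```python
# Minimal sketch: probing semantic structure in pretrained word embeddings.
# Assumes gensim is installed (pip install gensim); the model is downloaded
# on first use (~66 MB for glove-wiki-gigaword-50).
import gensim.downloader as api

# Load 50-dimensional GloVe vectors trained on Wikipedia + Gigaword.
vectors = api.load("glove-wiki-gigaword-50")

# Each word maps to a dense vector; semantically related words lie nearby,
# as measured by cosine similarity.
print(vectors.similarity("doctor", "nurse"))   # relatively high
print(vectors.similarity("doctor", "banana"))  # relatively low

# Relationships appear as vector arithmetic: king - man + woman ~ queen.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```

Note that these are static embeddings, where each word has a single fixed vector; the contextualized approaches mentioned above instead produce a different vector for each occurrence of a word depending on its surrounding text.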