Word Embeddings
Word embeddings are dense vector representations of words that capture semantic meaning and relationships in a numerical space: words with related meanings map to nearby vectors. Current research focuses on improving embedding quality through contextualization (conditioning a word's vector on its surrounding words), mitigating social biases encoded in the vectors, and extending embeddings to low-resource languages and specialized domains such as medicine, using architectures like transformers and graph convolutional networks. These advances improve a range of NLP tasks, including text classification, question answering, and information retrieval, and benefit fields from education to healthcare through more accurate and interpretable language models.
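To make the "dense vector" idea concrete, here is a minimal sketch using hypothetical toy vectors (real embeddings are learned from large corpora and typically have hundreds of dimensions). It shows how semantic relationships surface as vector arithmetic, with cosine similarity measuring closeness:

```python
import numpy as np

# Hypothetical 3-dimensional embeddings, chosen purely for illustration.
embeddings = {
    "king":  np.array([0.9,  0.8, 0.1]),
    "queen": np.array([0.9, -0.8, 0.1]),
    "man":   np.array([0.1,  0.8, 0.2]),
    "woman": np.array([0.1, -0.8, 0.2]),
    "apple": np.array([0.0,  0.0, 0.9]),
}

def cosine(u, v):
    """Cosine similarity: the angle between two vectors, ignoring magnitude."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Semantic relationships appear as vector offsets:
# king - man + woman should land near queen.
target = embeddings["king"] - embeddings["man"] + embeddings["woman"]
best = max(
    (w for w in embeddings if w not in {"king", "man", "woman"}),
    key=lambda w: cosine(target, embeddings[w]),
)
print(best)  # -> queen
```

The contextualization mentioned above can be sketched with a pretrained transformer: unlike the static table above, a contextual model assigns the same word different vectors in different sentences. The sketch below assumes the Hugging Face `transformers` and `torch` packages are installed and that the probe word tokenizes to a single piece in `bert-base-uncased` (true for "bank"):

```python
# pip install transformers torch
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence, word):
    """Return the contextual embedding of `word` within `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

v1 = word_vector("He sat by the river bank.", "bank")
v2 = word_vector("She deposited cash at the bank.", "bank")
# The two "bank" vectors differ because their contexts differ.
sim = torch.cosine_similarity(v1, v2, dim=0).item()
print(f"similarity between the two 'bank' vectors: {sim:.3f}")
```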