Output Embeddings
Output embeddings, the vector representations a model produces for its outputs, are central to many natural language processing (NLP) tasks. Current research focuses on refining these embeddings through contrastive learning, leveraging large language models (LLMs) for richer context modeling, and applying techniques such as targeted concept removal to reduce bias and improve fairness. These advances feed into downstream applications, including more accurate automatic speech recognition, more effective task-oriented dialog systems, and stronger knowledge graph embeddings, ultimately yielding more robust and reliable NLP systems.
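To make the contrastive-learning idea concrete, the sketch below computes an in-batch InfoNCE-style loss over a batch of output embeddings: each embedding is pulled toward its matched "positive" view and pushed away from the other rows in the batch, which act as negatives. This is a minimal illustration in NumPy, not taken from any specific paper summarized here; the function name, the temperature value, and the noisy-view construction are all illustrative assumptions.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """In-batch contrastive (InfoNCE-style) loss over output embeddings.

    anchors, positives: (batch, dim) arrays. Row i of `positives` is the
    matched positive for row i of `anchors`; every other row in the batch
    serves as a negative. (Illustrative sketch, not from a specific paper.)
    """
    # L2-normalize so dot products become cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                  # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The correct "class" for row i is column i (its matched positive),
    # so the loss is the mean negative log-probability on the diagonal.
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
# A lightly perturbed copy stands in for an augmented positive view.
noisy = emb + 0.05 * rng.normal(size=emb.shape)
loss = info_nce_loss(emb, noisy)
print(float(loss))
```

Because the positive views here are near-duplicates of the anchors, the loss is small; misaligning the pairs (e.g. shuffling the rows of `noisy`) drives it up, which is exactly the gradient signal used to refine embeddings contrastively.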